Feb 02 06:46:13 crc systemd[1]: Starting Kubernetes Kubelet... Feb 02 06:46:13 crc restorecon[4814]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 02 06:46:13 
crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 
06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc 
restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 
crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.152161 4842 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157056 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157089 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157099 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157109 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157119 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157127 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157136 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157145 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157154 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157162 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157196 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157204 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157213 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157229 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157265 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157276 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157285 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157295 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157303 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157313 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157323 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157331 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157340 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157348 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157356 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157364 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157372 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157380 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157388 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157395 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157403 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157411 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157418 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157426 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157434 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157441 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157451 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157459 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157467 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157474 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157482 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc 
kubenswrapper[4842]: W0202 06:46:15.157489 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157496 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157504 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157512 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157519 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157619 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157934 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157945 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157953 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157971 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157980 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158087 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158096 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158105 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158120 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158129 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158137 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158152 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158162 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158186 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158194 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158202 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158210 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158224 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158252 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158261 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc 
kubenswrapper[4842]: W0202 06:46:15.158269 4842 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158284 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158294 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158303 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158563 4842 flags.go:64] FLAG: --address="0.0.0.0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158581 4842 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158594 4842 flags.go:64] FLAG: --anonymous-auth="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158615 4842 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158626 4842 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158637 4842 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158648 4842 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158660 4842 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158670 4842 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158680 4842 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158691 4842 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158711 4842 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158721 4842 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158730 4842 flags.go:64] FLAG: --cgroup-root="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158740 4842 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158750 4842 flags.go:64] FLAG: --client-ca-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158760 4842 flags.go:64] FLAG: --cloud-config="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158769 4842 flags.go:64] FLAG: --cloud-provider="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158778 4842 flags.go:64] FLAG: --cluster-dns="[]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158790 4842 flags.go:64] FLAG: --cluster-domain="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158806 4842 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158816 4842 flags.go:64] FLAG: --config-dir="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158825 4842 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158835 4842 flags.go:64] FLAG: --container-log-max-files="5" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158848 4842 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 02 06:46:15 
crc kubenswrapper[4842]: I0202 06:46:15.158858 4842 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158868 4842 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158878 4842 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158894 4842 flags.go:64] FLAG: --contention-profiling="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158905 4842 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158915 4842 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158925 4842 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158934 4842 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158946 4842 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158955 4842 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158965 4842 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158975 4842 flags.go:64] FLAG: --enable-load-reader="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158993 4842 flags.go:64] FLAG: --enable-server="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159002 4842 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159015 4842 flags.go:64] FLAG: --event-burst="100" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159025 4842 flags.go:64] FLAG: --event-qps="50" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159034 4842 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159044 4842 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159057 4842 flags.go:64] FLAG: --eviction-hard="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159068 4842 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159086 4842 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159096 4842 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159108 4842 flags.go:64] FLAG: --eviction-soft="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159117 4842 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159127 4842 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159136 4842 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159145 4842 flags.go:64] FLAG: --experimental-mounter-path="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159155 4842 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159165 4842 flags.go:64] FLAG: --fail-swap-on="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159344 4842 flags.go:64] FLAG: --feature-gates="" Feb 
02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159405 4842 flags.go:64] FLAG: --file-check-frequency="20s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159426 4842 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159442 4842 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159456 4842 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159530 4842 flags.go:64] FLAG: --healthz-port="10248" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159544 4842 flags.go:64] FLAG: --help="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159554 4842 flags.go:64] FLAG: --hostname-override="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159564 4842 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159971 4842 flags.go:64] FLAG: --http-check-frequency="20s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159996 4842 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160007 4842 flags.go:64] FLAG: --image-credential-provider-config="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160016 4842 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160025 4842 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160036 4842 flags.go:64] FLAG: --image-service-endpoint="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160049 4842 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160061 4842 flags.go:64] FLAG: --kube-api-burst="100" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160073 4842 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160086 4842 flags.go:64] FLAG: --kube-api-qps="50" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160097 4842 flags.go:64] FLAG: --kube-reserved="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160110 4842 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160121 4842 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160130 4842 flags.go:64] FLAG: --kubelet-cgroups="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160140 4842 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160149 4842 flags.go:64] FLAG: --lock-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160158 4842 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160168 4842 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160178 4842 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160195 4842 flags.go:64] FLAG: --log-json-split-stream="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160204 4842 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160213 4842 flags.go:64] FLAG: --log-text-split-stream="false" Feb 02 06:46:15 crc 
kubenswrapper[4842]: I0202 06:46:15.160254 4842 flags.go:64] FLAG: --logging-format="text" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160264 4842 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160275 4842 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160284 4842 flags.go:64] FLAG: --manifest-url="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160293 4842 flags.go:64] FLAG: --manifest-url-header="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160309 4842 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160319 4842 flags.go:64] FLAG: --max-open-files="1000000" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160331 4842 flags.go:64] FLAG: --max-pods="110" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160340 4842 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160350 4842 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160361 4842 flags.go:64] FLAG: --memory-manager-policy="None" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160370 4842 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160380 4842 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160390 4842 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160400 4842 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160436 4842 flags.go:64] FLAG: --node-status-max-images="50" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160445 4842 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160455 4842 flags.go:64] FLAG: --oom-score-adj="-999" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160464 4842 flags.go:64] FLAG: --pod-cidr="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160473 4842 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160488 4842 flags.go:64] FLAG: --pod-manifest-path="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160497 4842 flags.go:64] FLAG: --pod-max-pids="-1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160506 4842 flags.go:64] FLAG: --pods-per-core="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160515 4842 flags.go:64] FLAG: --port="10250" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160524 4842 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160533 4842 flags.go:64] FLAG: --provider-id="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160542 4842 flags.go:64] FLAG: --qos-reserved="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160551 4842 flags.go:64] FLAG: --read-only-port="10255" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160560 4842 flags.go:64] FLAG: --register-node="true" Feb 02 06:46:15 crc 
kubenswrapper[4842]: I0202 06:46:15.160569 4842 flags.go:64] FLAG: --register-schedulable="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160580 4842 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160598 4842 flags.go:64] FLAG: --registry-burst="10" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160608 4842 flags.go:64] FLAG: --registry-qps="5" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160617 4842 flags.go:64] FLAG: --reserved-cpus="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160626 4842 flags.go:64] FLAG: --reserved-memory="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160638 4842 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160647 4842 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160657 4842 flags.go:64] FLAG: --rotate-certificates="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160666 4842 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160675 4842 flags.go:64] FLAG: --runonce="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160684 4842 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160693 4842 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160703 4842 flags.go:64] FLAG: --seccomp-default="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160711 4842 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160720 4842 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160730 4842 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160739 4842 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160748 4842 flags.go:64] FLAG: --storage-driver-password="root" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160757 4842 flags.go:64] FLAG: --storage-driver-secure="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160766 4842 flags.go:64] FLAG: --storage-driver-table="stats" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160776 4842 flags.go:64] FLAG: --storage-driver-user="root" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160785 4842 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160794 4842 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160803 4842 flags.go:64] FLAG: --system-cgroups="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160812 4842 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160829 4842 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160838 4842 flags.go:64] FLAG: --tls-cert-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160847 4842 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160864 4842 flags.go:64] FLAG: --tls-min-version="" Feb 02 06:46:15 
crc kubenswrapper[4842]: I0202 06:46:15.160874 4842 flags.go:64] FLAG: --tls-private-key-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160883 4842 flags.go:64] FLAG: --topology-manager-policy="none" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160892 4842 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160901 4842 flags.go:64] FLAG: --topology-manager-scope="container" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160910 4842 flags.go:64] FLAG: --v="2" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160930 4842 flags.go:64] FLAG: --version="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160943 4842 flags.go:64] FLAG: --vmodule="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160954 4842 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160965 4842 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161286 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161300 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161309 4842 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161317 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161328 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161336 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161344 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161351 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161359 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161367 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161374 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161382 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161389 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161397 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161407 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161419 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161429 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161440 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161450 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161460 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161474 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161487 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161498 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161507 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161515 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161523 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161530 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161539 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161551 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161559 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161567 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161578 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161588 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161596 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161605 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161614 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161622 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161630 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161637 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161645 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161652 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161660 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161668 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161676 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161683 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161691 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161699 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161706 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161735 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161743 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161750 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161758 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161766 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161773 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161781 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161789 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161799 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161809 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161817 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161825 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161836 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161847 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161854 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161862 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161870 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161878 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161886 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161893 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161901 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161908 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161915 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.161941 4842 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.174535 4842 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.174597 4842 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174742 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174764 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174775 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174789 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174801 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174812 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174821 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174830 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174839 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174848 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174857 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174865 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174873 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174881 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174888 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174897 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174905 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174912 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174920 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174928 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174937 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174946 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174954 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174962 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174971 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174979 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174988 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174996 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175005 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175015 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175025 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175035 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175044 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175052 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175063 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175071 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175082 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175091 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175099 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175107 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175115 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175123 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175131 4842 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175138 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175146 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175154 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175162 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175169 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175177 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175184 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175192 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175200 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175209 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175225 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175264 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc 
kubenswrapper[4842]: W0202 06:46:15.175275 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175285 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175294 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175301 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175309 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175316 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175325 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175333 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175341 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175349 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175356 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175364 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175371 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175379 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175387 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175397 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.175411 4842 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176642 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176665 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176675 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176683 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176692 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176700 4842 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176709 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176717 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176726 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176738 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176749 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176758 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176767 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176776 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176785 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176794 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176803 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176811 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176819 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176827 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176835 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176842 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176850 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176857 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176865 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176873 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176883 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176893 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176902 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176910 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176918 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176926 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176934 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176942 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176952 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176962 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176970 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176978 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176985 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176994 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177001 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177009 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177016 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177024 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177031 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177039 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177046 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177054 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177062 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177070 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177081 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177091 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177100 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177110 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177118 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177127 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177135 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177143 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177152 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177160 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177168 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177176 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177184 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177191 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177199 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177206 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177222 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177255 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177266 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177275 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177284 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.177297 4842 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.178540 4842 server.go:940] "Client rotation is on, will bootstrap in background" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.184581 4842 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.185433 4842 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.187462 4842 server.go:997] "Starting client certificate rotation" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.187511 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.188576 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-12 16:45:10.284367695 +0000 UTC Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.188727 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.217882 4842 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.221991 4842 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.223289 4842 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.247523 4842 log.go:25] "Validated CRI v1 runtime API" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.287964 4842 log.go:25] "Validated CRI v1 image API" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.290603 4842 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.298172 4842 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-06-36-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.298280 4842 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.330019 4842 manager.go:217] Machine: {Timestamp:2026-02-02 06:46:15.326431288 +0000 UTC m=+0.703699280 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a2d9b7d5-4deb-436c-8c47-643b2c87256c BootID:46282451-0a80-4a55-be60-279b5a40f455 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e3:ab:6e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e3:ab:6e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:29:42:54 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:51:e6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c4:6e:4b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:59:b4:49 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:3a:82:4c Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:da:29:a6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:dc:c0:b5:f3:ef Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:d5:88:bc:b8:06 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 
Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.330533 4842 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
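[Editor's note] The Machine record above is cAdvisor's hardware inventory: filesystem capacities come from sizing each mount point, and the topology block enumerates per-CPU caches. As a rough illustration only (this is not cAdvisor's actual fs.go), the "Filesystem partitions" map can be approximated from /proc/mounts plus statfs(2); Linux-only sketch:

```go
// Rough approximation of a filesystem scan: list block-device and tmpfs
// mounts from /proc/mounts and size them with statfs(2).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"syscall"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 3 {
			continue
		}
		dev, mnt, fsType := fields[0], fields[1], fields[2]
		// Keep roughly what the log above shows: real devices plus tmpfs.
		if !strings.HasPrefix(dev, "/dev/") && fsType != "tmpfs" {
			continue
		}
		var st syscall.Statfs_t
		if err := syscall.Statfs(mnt, &st); err != nil {
			continue
		}
		fmt.Printf("%s:{mountpoint:%s fsType:%s capacity:%d inodes:%d}\n",
			dev, mnt, fsType, st.Blocks*uint64(st.Bsize), st.Files)
	}
}
```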
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.330727 4842 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.331180 4842 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.331603 4842 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.331672 4842 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332036 4842 topology_manager.go:138] "Creating topology manager with none policy"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332053 4842 container_manager_linux.go:303] "Creating device plugin manager"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332673 4842 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332725 4842 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.333539 4842 state_mem.go:36] "Initialized new in-memory state store"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.333766 4842 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338668 4842 kubelet.go:418] "Attempting to sync node with API server"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338702 4842 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
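[Editor's note] The nodeConfig dump above carries the kubelet's resource-management contract for this node: 200m CPU and 350Mi memory reserved for the system, pod PID limit 4096, cgroup v2 via systemd, and five hard eviction thresholds, some absolute (memory.available < 100Mi) and some percentage-based (nodefs.available < 10%). A small sketch of how such a threshold can be evaluated; the types are illustrative, not the kubelet's eviction API:

```go
// Illustrative threshold check: a signal trips when the observed available
// amount drops below either an absolute quantity or a percentage of
// capacity, matching the HardEvictionThresholds shapes in the nodeConfig.
package main

import "fmt"

type threshold struct {
	signal   string
	quantity uint64  // absolute bytes; 0 means percentage-based
	percent  float64 // fraction of capacity; used when quantity is 0
}

func crossed(t threshold, available, capacity uint64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = uint64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	memAvail := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percent: 0.1}          // 10%

	fmt.Println(crossed(memAvail, 64<<20, 32<<30)) // true: 64Mi < 100Mi
	fmt.Println(crossed(nodefs, 20<<30, 80<<30))   // false: 25% free >= 10%
}
```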
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338742 4842 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338763 4842 kubelet.go:324] "Adding apiserver pod source"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338783 4842 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.343603 4842 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.344748 4842 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.345699 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.345842 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.345898 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.345954 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.347708 4842 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349724 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349775 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349806 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349828 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349868 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349889 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349904 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349926 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349943 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349957 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.350001 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.350015 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.350958 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.352096 4842 server.go:1280] "Started kubelet"
Feb 02 06:46:15 crc systemd[1]: Started Kubernetes Kubelet.
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.353938 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.353547 4842 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.354114 4842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357112 4842 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357642 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357708 4842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357725 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:33:25.908262688 +0000 UTC
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357898 4842 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357948 4842 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.358159 4842 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.358182 4842 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.363293 4842 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.363340 4842 factory.go:55] Registering systemd factory
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.363361 4842 factory.go:221] Registration of the systemd container factory successfully
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.364893 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.365290 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365576 4842 factory.go:153] Registering CRI-O factory
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.365413 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365616 4842 factory.go:221] Registration of the crio container factory successfully
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365788 4842 factory.go:103] Registering Raw factory
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365823 4842 manager.go:1196] Started watching for new ooms in manager
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.366417 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18905b0f6c071ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,LastTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.374315 4842 manager.go:319] Starting recovery of all containers
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.374820 4842 server.go:460] "Adding debug handlers to kubelet server"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384502 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384603 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384630 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384652 4842 reconstruct.go:130] "Volume is marked as uncertain
and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384672 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384695 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384759 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384793 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384821 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384841 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384860 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384880 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384904 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384937 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384969 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384997 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385024 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385046 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385064 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385088 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385155 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385184 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385229 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385293 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385322 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385351 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385387 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385417 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385449 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385481 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385510 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385537 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385565 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385592 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385618 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385646 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385673 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385701 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385726 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385753 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385779 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385808 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385835 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385863 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385892 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385919 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385946 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385973 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386001 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386032 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386058 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386084 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386118 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386146 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386176 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386208 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386277 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386310 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386335 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386361 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386390 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386417 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386446 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386473 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386502 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386529 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386555 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386584 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386613 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386643 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386672 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386699 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386726 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386754 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386778 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386803 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386830 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386857 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386885 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386912 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386939 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386962 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386986 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387012 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387037 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387064 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387153 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387180 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387206 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387301 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387333 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387361 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387390 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387418 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387446 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387486 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387521 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387548 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387575 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387599 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387632 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387658 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387690 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387715 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387819 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387857 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387887 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387916 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387944 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387973 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388003 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388030 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388060 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388090 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388116 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388153 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388180 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388206 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388278 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388309 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388333 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388355 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388378 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388405 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388431 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388457 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388482 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388506 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388533 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388572 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388596 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388635 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388663 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388686 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388714 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388738 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388761 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388795 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388826 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388853 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388877 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388901 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388928 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388951 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388976 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389000 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389026 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389051 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389092 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389120 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389149 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389174 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389203 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389269 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389298 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389323 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389347 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389371 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389413 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389439 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389487 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389532 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389565 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389590 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389617 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389655 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389683 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389709 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389733 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389762 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389786 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389810 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389848 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393400 4842 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393461 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393491 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393519 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393545 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393570 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393604 4842 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393629 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393653 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393677 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393702 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393726 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393751 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393778 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393835 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393867 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393894 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393970 4842 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393996 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394034 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394059 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394085 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394118 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394158 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394185 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394225 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394319 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394345 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394373 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394397 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394422 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394460 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394488 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394518 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394542 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394606 4842 reconstruct.go:97] "Volume reconstruction finished" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394624 4842 reconciler.go:26] "Reconciler: start to sync state" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.412889 4842 manager.go:324] Recovery completed Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.428891 4842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.432126 4842 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.432184 4842 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.432226 4842 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.432372 4842 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.434893 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.434994 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.436017 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.438399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.438443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.438457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.439880 4842 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.439925 4842 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.439962 4842 state_mem.go:36] "Initialized new in-memory state store" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.458933 4842 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.468528 4842 policy_none.go:49] "None policy: Start" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.469923 4842 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.469988 4842 state_mem.go:35] "Initializing new in-memory state store" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.532870 4842 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.543853 4842 manager.go:334] "Starting Device Plugin manager" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.543922 4842 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.543942 4842 server.go:79] "Starting device plugin registration server" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.544536 4842 eviction_manager.go:189] "Eviction manager: 
starting control loop" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.544565 4842 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.544989 4842 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.545107 4842 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.545127 4842 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.562402 4842 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.567397 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.645787 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.647919 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.648020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.648041 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.648112 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.649079 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.733742 4842 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.733895 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.735692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.735763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.735787 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.736025 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.736704 4842 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737043 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737930 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.738194 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.738324 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.739921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740000 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740019 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740087 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.739937 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740512 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740772 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740833 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.741896 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.741996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742080 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742383 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742777 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742876 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.743315 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744845 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744886 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.745025 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.745049 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.746284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.746344 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.746358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.800888 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801071 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801276 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801374 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801460 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801551 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801638 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801688 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801779 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801962 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.802057 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.802146 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.802257 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:15 crc 
kubenswrapper[4842]: I0202 06:46:15.850024 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852086 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852200 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.852884 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.903702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.903812 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.903978 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904032 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904021 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904209 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904285 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904280 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904554 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904627 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904725 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904748 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904834 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904913 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904927 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904965 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905044 
4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905103 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905139 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905184 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905288 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905385 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905391 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905443 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905502 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905594 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905632 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905687 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905736 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905858 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.968381 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.079907 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.111692 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.136352 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0 WatchSource:0}: Error finding container 5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0: Status 404 returned error can't find the container with id 5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0 Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.139948 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.163406 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.166917 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35 WatchSource:0}: Error finding container 9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35: Status 404 returned error can't find the container with id 9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35 Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.179044 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.186324 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396 WatchSource:0}: Error finding container 911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396: Status 404 returned error can't find the container with id 911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396 Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.206698 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5 WatchSource:0}: Error finding container 39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5: Status 404 returned error can't find the container with id 39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5 Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.253946 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256362 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256436 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.257209 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.261391 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.261592 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.356513 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.358808 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:14:29.717778861 +0000 UTC Feb 02 
06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.437989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0"} Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.439444 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5"} Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.440742 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396"} Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.443996 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35"} Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.446413 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0443add00b8f4fb80a07e481f140e82798e6760a04afde71ce4c66bedae993fb"} Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.453246 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.453338 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.471483 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.471594 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.558121 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.558333 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.769038 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.057443 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060517 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 06:46:17 crc kubenswrapper[4842]: E0202 06:46:17.061679 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.247131 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 06:46:17 crc kubenswrapper[4842]: E0202 06:46:17.249681 4842 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.355248 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.359334 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:09:11.099302836 +0000 UTC Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.453815 4842 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d" exitCode=0 Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.454043 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.453962 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d"} Feb 02 06:46:17 crc 
kubenswrapper[4842]: I0202 06:46:17.458182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.458320 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.458347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.465123 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd"} Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.465259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b"} Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.465296 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf"} Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.467081 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45" exitCode=0 Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.467204 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.467381 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45"} Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.468151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.468179 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.468191 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471212 4842 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="618f6f6d52e9588bd7ddbd245c55dfef433902618db7d9aacf19b742debaba1d" exitCode=0 Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471333 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"618f6f6d52e9588bd7ddbd245c55dfef433902618db7d9aacf19b742debaba1d"} Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471441 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471807 4842 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.472837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.472864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.472877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.473741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.473797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.473816 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.484830 4842 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205" exitCode=0 Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.484898 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205"} Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.484981 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.486850 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.486912 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.486934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: W0202 06:46:18.147521 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.147695 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.355353 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.359460 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:27:52.169052083 +0000 UTC Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.369693 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493717 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493777 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493805 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.499859 4842 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ef728f328ecc7ea05eff1fe86deb439e0a78e677a87a42e0382395ad1b32e254" exitCode=0 Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.500160 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.500265 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ef728f328ecc7ea05eff1fe86deb439e0a78e677a87a42e0382395ad1b32e254"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.501616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.501650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.501662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.504948 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.505086 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.511015 4842 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.511057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.511067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.537626 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.537691 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.537704 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.538034 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.539192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.539260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.539274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.543435 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.543577 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.546920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.546958 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.546971 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: W0202 06:46:18.576267 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.576388 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.662776 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.663984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.664038 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.664057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.664098 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.664697 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc" Feb 02 06:46:18 crc kubenswrapper[4842]: W0202 06:46:18.906846 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.906988 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.360292 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:07:23.87655444 +0000 UTC Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.550647 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169"} Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.550916 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.552887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.552927 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.552943 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554465 4842 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4ff7d2c230b7ef8d5dae5a246f049192db6652d55aeae25115de2041dbb3be74" exitCode=0 Feb 02 06:46:19 
crc kubenswrapper[4842]: I0202 06:46:19.554554 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554584 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554704 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554764 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554634 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4ff7d2c230b7ef8d5dae5a246f049192db6652d55aeae25115de2041dbb3be74"} Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554868 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556653 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556697 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557879 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557902 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.361060 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:24:11.316834399 +0000 UTC Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.423711 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567211 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ad7e16aa26380210f6e5a17aba39b2e15ff5b543a25247c7222f05c398888fbe"} Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567321 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"449d5b62df4e1db49847e3d77dc4ca3c70b573290bb19f9c56f6057a404b92bc"} Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f9c0153fa6a4621977051bc7520582c8f6ddba3cefc69852a44383b1d1dd0b87"} Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567521 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567545 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567617 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569867 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569851 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569965 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.647502 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.289071 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.361502 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:19:00.118779106 +0000 UTC Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.577887 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4f62f42ebc4afae27aa42966f04a4638ae38d0ef84da92504a0a303b56ffd69"} Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.577941 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af6f71282f78a0334feb7e8e7cd6fd7b9c4adf33d862bda0a4a0006cdf1702e3"} Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.577982 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.578052 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.578182 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579546 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579930 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579973 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.580021 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.580066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.580088 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.590767 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.865300 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867070 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867166 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867201 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.898267 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.362062 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:52:21.883507355 +0000 UTC Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.580945 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.580983 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.580945 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584012 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584068 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584088 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584317 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.585831 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.585869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.585889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.362539 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:59:13.713176361 +0000 UTC Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.362655 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.584610 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.586052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.586107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.586128 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.724625 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.724927 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.727078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.727147 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.727167 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.990944 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.289958 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.290079 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.362814 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:18:09.76516727 +0000 UTC Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.587982 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.589189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.589258 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.589273 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.363409 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:14:21.390360314 +0000 UTC Feb 02 06:46:25 crc kubenswrapper[4842]: E0202 06:46:25.563517 4842 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.604155 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.604587 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.606931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.607046 4842 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.607076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.617690 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.165581 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.166013 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.168149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.168255 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.168276 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.364457 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:10:18.684816104 +0000 UTC Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.594753 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.596567 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.596624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.596639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.603342 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.365167 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:32:18.003117173 +0000 UTC Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.598764 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.600538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.600603 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.600621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.332111 4842 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.332183 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.365406 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:13:36.127478912 +0000 UTC Feb 02 06:46:28 crc kubenswrapper[4842]: W0202 06:46:28.981210 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.981416 4842 trace.go:236] Trace[1376167792]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:18.979) (total time: 10002ms): Feb 02 06:46:28 crc kubenswrapper[4842]: Trace[1376167792]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (06:46:28.981) Feb 02 06:46:28 crc kubenswrapper[4842]: Trace[1376167792]: [10.002268589s] [10.002268589s] END Feb 02 06:46:28 crc kubenswrapper[4842]: E0202 06:46:28.981464 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 02 06:46:29 crc kubenswrapper[4842]: E0202 06:46:29.278172 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18905b0f6c071ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,LastTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 06:46:29 crc kubenswrapper[4842]: I0202 06:46:29.355147 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 02 06:46:29 crc kubenswrapper[4842]: I0202 06:46:29.365503 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:49:47.403220307 +0000 UTC Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.172028 4842 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.172114 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.184928 4842 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.185268 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.366172 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:59:22.645540078 +0000 UTC Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.657033 4842 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]log ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]etcd ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/priority-and-fairness-filter ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-apiextensions-informers ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-apiextensions-controllers ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/crd-informer-synced ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-system-namespaces-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: 
[+]poststarthook/start-cluster-authentication-info-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 02 06:46:30 crc kubenswrapper[4842]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 02 06:46:30 crc kubenswrapper[4842]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/bootstrap-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-kube-aggregator-informers ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-registration-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-discovery-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]autoregister-completion ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-openapi-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 02 06:46:30 crc kubenswrapper[4842]: livez check failed Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.657096 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:46:31 crc kubenswrapper[4842]: I0202 06:46:31.367739 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:34:57.233600788 +0000 UTC Feb 02 06:46:32 crc kubenswrapper[4842]: I0202 06:46:32.368420 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:07:29.078074974 +0000 UTC Feb 02 06:46:32 crc kubenswrapper[4842]: I0202 06:46:32.735467 4842 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 06:46:33 crc kubenswrapper[4842]: I0202 06:46:33.369432 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:38:52.913162196 +0000 UTC Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.045086 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.045263 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:34 
crc kubenswrapper[4842]: I0202 06:46:34.046725 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.046779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.046792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.070529 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.289847 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.289981 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.369808 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:39:37.98942513 +0000 UTC Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.616647 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.618123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.618184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.618210 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.184345 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.186526 4842 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.187981 4842 trace.go:236] Trace[1493835285]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:23.314) (total time: 11873ms): Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1493835285]: ---"Objects listed" error: 11873ms (06:46:35.187) Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1493835285]: [11.873418159s] [11.873418159s] END Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.188011 4842 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.188276 4842 trace.go:236] Trace[2102861008]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:24.179) (total time: 11009ms): Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[2102861008]: ---"Objects listed" error: 11009ms (06:46:35.188) Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[2102861008]: [11.009060498s] [11.009060498s] END Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.188294 4842 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.189856 4842 trace.go:236] Trace[1555826742]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:22.338) (total time: 12851ms): Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1555826742]: ---"Objects listed" error: 12850ms (06:46:35.189) Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1555826742]: [12.851054054s] [12.851054054s] END Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.189904 4842 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.191008 4842 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.200020 4842 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.200556 4842 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202534 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202597 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.225817 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.235848 4842 csr.go:261] certificate signing request csr-glph9 is approved, waiting to be issued Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237872 4842 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237933 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237970 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.251886 4842 csr.go:257] certificate signing request csr-glph9 is issued Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.251955 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256585 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256620 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.256632 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256676 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.269690 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274384 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.274484 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274517 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274537 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.286868 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.292986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.293069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.293090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.293124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.293144 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.307957 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.308119 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.309956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.310003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.310016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.310044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.310058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.352402 4842 apiserver.go:52] "Watching apiserver" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.356344 4842 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.356772 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357354 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357384 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357352 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357458 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357698 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.357771 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357782 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.357492 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.358030 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.358671 4842 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.360414 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.361529 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.361830 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.361948 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.362093 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363274 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363498 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363655 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363835 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.370071 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:29:45.908475176 +0000 UTC Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.386931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.387888 4842 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.387891 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388005 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388033 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388118 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388142 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388165 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388189 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388272 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388295 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388318 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388340 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388363 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388436 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388459 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388484 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388545 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388571 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388595 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388620 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388660 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388682 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388696 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388704 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388771 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388798 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388822 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388848 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388871 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388894 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388943 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388952 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388965 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389128 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389160 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389203 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389305 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389331 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389355 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389402 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389425 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389447 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389468 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389502 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389525 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389548 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389572 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389594 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389616 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389641 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389665 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389688 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" 
(UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389733 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389754 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389776 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389801 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389827 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389849 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389872 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389893 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389917 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389938 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389959 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389982 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390030 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390106 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390152 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390176 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390198 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390240 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390263 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390285 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390310 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390333 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390356 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390425 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390447 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390468 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389177 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389278 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389471 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389531 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389635 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389781 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389876 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389952 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390079 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390121 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390198 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390490 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391245 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391474 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391702 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391734 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391769 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391806 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392035 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392081 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392425 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392451 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392586 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392579 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392669 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392781 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392794 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392838 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393071 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393093 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393152 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393724 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393770 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393803 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393833 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394468 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394705 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394890 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394985 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395145 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395176 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395395 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395562 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395543 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395950 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396002 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396092 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396100 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396319 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396338 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.397520 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.397816 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.397994 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398192 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398265 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398295 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398097 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398618 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398649 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398707 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398745 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398781 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398805 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399105 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399106 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399197 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399247 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399340 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399480 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403332 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403402 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403446 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403480 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403510 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403540 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 
06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403570 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403597 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403985 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404014 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404040 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404064 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404092 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404118 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404143 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404170 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411342 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411428 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411465 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411501 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411535 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411565 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411598 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411622 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411652 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411680 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411735 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411762 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411794 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411821 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411843 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411863 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411890 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411911 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411984 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412013 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412037 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412058 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412079 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412108 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412133 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412154 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412174 4842 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412204 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412255 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412283 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412326 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412352 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412370 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412395 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412423 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412453 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412431 4842 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412479 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412511 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413165 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413347 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413668 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413712 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413903 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413928 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414137 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414257 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414383 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414605 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414644 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414813 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.415023 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.415346 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.415732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.416057 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.419693 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420174 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420479 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420554 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420735 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.421203 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.422277 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.422765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.424840 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.425818 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.425870 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.427689 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.430502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431012 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431036 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431350 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431361 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431750 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432029 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432325 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432411 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432477 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432661 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432717 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432758 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432862 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433345 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433556 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433657 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.434076 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.434142 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.434606 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435004 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435058 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435009 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433908 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435516 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435621 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436276 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436342 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436373 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436394 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436418 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436635 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436684 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436725 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436757 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436781 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437201 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437281 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437714 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437831 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437754 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.438157 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.438736 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.438852 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439295 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439503 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439581 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439612 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439641 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439663 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439690 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439714 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440017 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440387 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440621 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440685 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440727 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440754 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440895 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.441188 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.441754 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.441900 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442028 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442087 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442177 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442231 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442261 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442365 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442631 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442708 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442745 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442785 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442950 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443271 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443470 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.443377 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.943345468 +0000 UTC m=+21.320613380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443631 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443779 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443853 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443927 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.447278 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448396 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448496 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448569 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448655 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448735 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448820 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448951 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449017 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449593 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449717 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451083 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451782 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451874 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451901 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451964 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451995 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452057 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452087 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452114 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452207 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452257 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452286 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452314 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452470 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452493 4842 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452507 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452522 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452542 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452555 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452570 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452583 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452601 4842 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452614 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452627 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452644 4842 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452656 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452677 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452690 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452706 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452720 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452734 4842 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452747 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452762 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452774 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452785 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452800 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452812 4842 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452825 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452837 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452851 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452862 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452874 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452886 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452900 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452911 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452924 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452936 4842 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452950 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452962 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452974 4842 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452990 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453001 4842 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453015 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453028 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453043 4842 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453055 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453066 4842 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453078 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453092 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453104 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453117 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453129 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453144 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453157 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453168 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453182 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453194 4842 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453206 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453840 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453863 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453876 4842 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453889 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453902 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453920 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453933 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453945 4842 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453960 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453973 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453986 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453998 4842 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454014 4842 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454026 4842 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454054 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454067 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454084 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454098 4842 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454110 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454124 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454139 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454152 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454165 4842 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454181 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454195 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454209 4842 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454245 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454259 4842 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454271 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454284 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454297 4842 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454312 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454324 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454337 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454354 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454366 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454378 4842 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454391 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454406 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454418 4842 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454430 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454445 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454463 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454477 4842 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454492 4842 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454503 4842 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454518 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454531 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454543 4842 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454560 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454572 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454583 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454595 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454610 4842 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454621 4842 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454632 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454642 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454657 4842 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454669 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454681 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454695 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454706 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454717 4842 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454728 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454742 4842 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454794 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454812 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454824 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454841 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454855 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454867 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454879 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454896 4842 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454909 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454922 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454939 4842 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454953 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454968 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454983 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455002 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455015 4842 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448962 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449289 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449544 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449558 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450117 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450505 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450582 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450654 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450670 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451393 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451686 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454342 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454612 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454893 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454905 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455665 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.456049 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.456505 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.457446 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.457912 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.458037 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.458253 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.458501 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.462374 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.467465 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.469115 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.470033 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.472828 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.472938 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.473721 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.474102 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.474280 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.474337 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.974320804 +0000 UTC m=+21.351588716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.474598 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.474836 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.475056 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.475751 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.476433 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.476867 4842 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.478710 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455026 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480739 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481170 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484286 4842 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484307 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484318 4842 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484328 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484339 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.480044 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484350 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480613 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc
kubenswrapper[4842]: I0202 06:46:35.480298 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480627 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484378 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480946 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484461 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481079 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481354 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481506 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481844 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481907 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.483976 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.484458 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.984435921 +0000 UTC m=+21.361704033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484545 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.485316 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.485313 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.485835 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.486849 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.487497 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.488128 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.489177 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.489959 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.489980 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.489994 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.490045 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.990029068 +0000 UTC m=+21.367296980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.490290 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.490847 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.491193 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.492547 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.492717 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.492805 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.493189 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494010 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494034 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494047 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494095 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.994083907 +0000 UTC m=+21.371351819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.494196 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.497577 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.497792 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.499178 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.499800 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.502332 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.503178 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.503262 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.503637 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.504800 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.505451 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.505901 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.506850 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.507034 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.507468 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.509035 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.509539 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.509765 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.510145 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.510647 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.511258 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.512170 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.512722 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.513512 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.513992 4842 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.514090 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.515624 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.515738 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.516610 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.516985 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.518507 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.519565 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.520096 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.521086 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.521716 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.522510 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.523063 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.523385 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.524001 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.524606 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.525403 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.525911 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.526721 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: 
I0202 06:46:35.527403 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.528191 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.528655 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.529498 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.530161 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.530710 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.531506 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.533801 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.542726 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544733 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544743 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544756 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544766 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.553208 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.563643 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.573778 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.581765 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585208 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585255 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585307 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585318 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585328 4842 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585392 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585466 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585529 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.585552 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585570 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585584 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585596 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585609 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585620 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585633 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585647 4842 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585662 4842 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585675 4842 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585687 4842 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585698 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585710 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585631 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585723 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585772 4842 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585788 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585801 4842 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585815 4842 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585827 4842 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585839 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585851 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585865 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585879 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585893 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585905 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585916 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585929 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585940 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585953 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585965 4842 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585976 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585988 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585999 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586014 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586025 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586037 4842 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586048 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586060 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586071 4842 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586085 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586098 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.590068 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.620844 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.622497 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169" exitCode=255 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.622547 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.632488 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.643287 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648686 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648726 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648743 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.652745 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.653016 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.662312 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.670048 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.670068 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.678067 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.679843 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.684821 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: W0202 06:46:35.687294 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31 WatchSource:0}: Error finding container 853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31: Status 404 returned error can't find the container with id 853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.689480 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.698495 4842 scope.go:117] "RemoveContainer" containerID="628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.698919 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.705264 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.719576 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.734416 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.746751 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.759741 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762849 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762878 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868541 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868612 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868806 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973591 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973626 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973655 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973668 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990198 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990301 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990345 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990369 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990340975 +0000 UTC m=+22.367608887 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990425 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990448 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990500 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990484399 +0000 UTC m=+22.367752331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990545 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990572 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990563051 +0000 UTC m=+22.367830983 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990575 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990589 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990600 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990632 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990625832 +0000 UTC m=+22.367893744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075724 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075790 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075799 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.091150 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091289 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091311 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091322 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091361 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:37.09134815 +0000 UTC m=+22.468616062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178142 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178154 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.252782 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 06:41:35 +0000 UTC, rotation deadline is 2026-11-21 08:22:26.985333574 +0000 UTC Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.252881 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7009h35m50.732456533s for next certificate rotation Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281082 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281115 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.370256 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:15:19.517161165 +0000 UTC Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383244 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383286 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383301 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383309 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485667 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485709 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.550747 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p5hqr"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.551102 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.552933 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-q2xjl"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.553270 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.553829 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554190 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554287 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554688 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554864 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554882 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554280 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.555651 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.556074 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.556463 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.562052 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-j7rrg"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.563951 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567010 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567104 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567161 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567204 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567200 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567249 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567448 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.568716 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.571202 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.571474 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.571664 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.572134 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.573685 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gmkx9"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.574166 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.576701 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.576935 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588404 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588477 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.589288 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595582 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595639 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595703 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-system-cni-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595733 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475lt\" (UniqueName: \"kubernetes.io/projected/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-kube-api-access-475lt\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595762 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0cc6e593-198e-4709-9026-103f892be5ff-proxy-tls\") pod 
\"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595789 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595817 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595863 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-os-release\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595893 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0cc6e593-198e-4709-9026-103f892be5ff-rootfs\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595920 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595948 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595976 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596009 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqr8f\" (UniqueName: \"kubernetes.io/projected/0cc6e593-198e-4709-9026-103f892be5ff-kube-api-access-kqr8f\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 
06:46:36.596038 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596066 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596095 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596128 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596165 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/110e0716-4e1c-49a1-acbb-016312fdb070-hosts-file\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596191 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596233 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596255 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596283 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596337 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596358 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cc6e593-198e-4709-9026-103f892be5ff-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596377 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596396 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596416 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596434 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596452 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596473 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cnibin\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596500 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jq8\" (UniqueName: \"kubernetes.io/projected/110e0716-4e1c-49a1-acbb-016312fdb070-kube-api-access-c4jq8\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.604881 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.626729 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.627518 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.630039 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.630807 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.631785 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c52035ea632a2c3e0a510756db259a4597bd6222111b1d7a316b030ee6ea0fe0"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.636424 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.636456 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"62145383b727e93d1fe22a7dfa6b24e7fd0cba3a9abb9b3ecd18dc16c39a6543"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638036 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638598 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638651 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638667 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.646560 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.663781 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691585 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691634 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691670 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697346 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697407 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-os-release\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cc6e593-198e-4709-9026-103f892be5ff-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697492 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697529 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697577 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697605 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-etc-kubernetes\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697646 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-netns\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697680 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697710 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-multus-certs\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cnibin\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697765 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jq8\" (UniqueName: \"kubernetes.io/projected/110e0716-4e1c-49a1-acbb-016312fdb070-kube-api-access-c4jq8\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697801 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-system-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697833 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cni-binary-copy\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697860 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-daemon-config\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697900 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697960 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-multus\") pod \"multus-gmkx9\" (UID: 
\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697996 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-socket-dir-parent\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698041 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698070 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cnibin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698096 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-k8s-cni-cncf-io\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698124 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-hostroot\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698150 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-system-cni-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698180 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475lt\" (UniqueName: \"kubernetes.io/projected/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-kube-api-access-475lt\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698241 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-os-release\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698273 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0cc6e593-198e-4709-9026-103f892be5ff-rootfs\") pod \"machine-config-daemon-p5hqr\" (UID: 
\"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698299 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0cc6e593-198e-4709-9026-103f892be5ff-proxy-tls\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698329 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698359 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698400 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698444 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqr8f\" (UniqueName: \"kubernetes.io/projected/0cc6e593-198e-4709-9026-103f892be5ff-kube-api-access-kqr8f\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698476 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698502 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698528 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698601 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-bin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698688 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cc6e593-198e-4709-9026-103f892be5ff-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698701 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698768 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698792 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698816 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/110e0716-4e1c-49a1-acbb-016312fdb070-hosts-file\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698832 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698862 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698880 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod 
\"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698897 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698917 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-kubelet\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698918 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698937 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698959 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-conf-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699037 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699059 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nf6\" (UniqueName: \"kubernetes.io/projected/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-kube-api-access-k4nf6\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699170 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699204 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: 
\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699260 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699299 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699331 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699577 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699643 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-system-cni-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699959 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/110e0716-4e1c-49a1-acbb-016312fdb070-hosts-file\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700019 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700085 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700131 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700132 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-os-release\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0cc6e593-198e-4709-9026-103f892be5ff-rootfs\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700379 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700508 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700618 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697790 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700711 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700776 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700827 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700834 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.701087 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cnibin\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.701556 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.702061 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.702230 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.704514 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.704882 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0cc6e593-198e-4709-9026-103f892be5ff-proxy-tls\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.726208 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.726361 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475lt\" (UniqueName: \"kubernetes.io/projected/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-kube-api-access-475lt\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.729082 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4jq8\" (UniqueName: \"kubernetes.io/projected/110e0716-4e1c-49a1-acbb-016312fdb070-kube-api-access-c4jq8\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.729087 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kqr8f\" (UniqueName: \"kubernetes.io/projected/0cc6e593-198e-4709-9026-103f892be5ff-kube-api-access-kqr8f\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.736527 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.759539 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462
\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.760070 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.780271 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.790878 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.793916 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794737 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794767 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794786 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794796 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807770 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-kubelet\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-conf-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807909 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4nf6\" (UniqueName: \"kubernetes.io/projected/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-kube-api-access-k4nf6\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807945 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-os-release\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807988 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-kubelet\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808011 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-etc-kubernetes\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807988 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-conf-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " 
pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808057 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-netns\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808063 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-os-release\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808094 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-multus-certs\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808123 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-netns\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808099 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-etc-kubernetes\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808159 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-system-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808196 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-multus-certs\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808201 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cni-binary-copy\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808261 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-system-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808263 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-daemon-config\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808316 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-multus\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-socket-dir-parent\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808510 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cnibin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808538 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-k8s-cni-cncf-io\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808562 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-hostroot\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808620 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808671 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-bin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808868 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cni-binary-copy\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-daemon-config\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: 
I0202 06:46:36.808925 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-multus\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-hostroot\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808927 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-bin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808972 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-socket-dir-parent\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808985 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cnibin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.809015 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-k8s-cni-cncf-io\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.809140 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: W0202 06:46:36.816027 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55bc304_5cb2_4f7f_83b9_09d8188c73f2.slice/crio-c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361 WatchSource:0}: Error finding container c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361: Status 404 returned error can't find the container with id c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361 Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.826512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.834554 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4nf6\" (UniqueName: \"kubernetes.io/projected/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-kube-api-access-k4nf6\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.859541 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.887137 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897488 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897534 4842 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897552 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897561 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.902870 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.920989 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.939530 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.952538 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.969514 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.997240 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999335 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999368 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.008502 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.015444 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.015560 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:37 crc 
kubenswrapper[4842]: I0202 06:46:37.015592 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015619 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015594403 +0000 UTC m=+24.392862315 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015653 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.015681 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015694 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015682975 +0000 UTC m=+24.392950887 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015766 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015779 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015790 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015812 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015817 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015810948 +0000 UTC m=+24.393078860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015845 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015838369 +0000 UTC m=+24.393106271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.020163 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.033245 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: W0202 06:46:37.041478 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cc6e593_198e_4709_9026_103f892be5ff.slice/crio-2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c WatchSource:0}: Error finding container 2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c: Status 404 returned error can't find the container with id 2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102407 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102467 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102484 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102509 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102523 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.108019 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gmkx9" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.116420 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116624 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116651 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116664 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116728 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.11670953 +0000 UTC m=+24.493977442 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: W0202 06:46:37.119890 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1fd21cd_ea6a_44a0_b136_f338fc97cf18.slice/crio-26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a WatchSource:0}: Error finding container 26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a: Status 404 returned error can't find the container with id 26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205654 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.307995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308081 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.371051 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:20:58.772815456 +0000 UTC Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416205 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416511 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416540 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416558 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.432988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.433007 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.433075 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.433397 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.433509 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.433654 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.437452 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.438377 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519128 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519136 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.621730 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622190 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622346 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622404 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.642967 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.643067 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.644515 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09" exitCode=0 Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.644560 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.644686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerStarted","Data":"c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.646785 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" exitCode=0 Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.646891 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.646952 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"ad55e0c8d5649109a4ec1a9a3e073a9a325c6f3565638121dd923673a8430c3b"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.650428 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q2xjl" event={"ID":"110e0716-4e1c-49a1-acbb-016312fdb070","Type":"ContainerStarted","Data":"172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.650463 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q2xjl" event={"ID":"110e0716-4e1c-49a1-acbb-016312fdb070","Type":"ContainerStarted","Data":"4eb673fa7258b1ad4a84348c36b407715714c46244de27067b0ca28eaf6a9837"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.652708 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.652766 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.652790 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.656782 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.674124 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.686996 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.702457 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.715564 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724157 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724205 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724236 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724257 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724269 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.731102 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.748762 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.776401 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827681 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 
06:46:37.827695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827713 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827722 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.843853 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.859421 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.874308 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.888614 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.901030 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.916608 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930331 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930380 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930396 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930406 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.939666 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.954510 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.967628 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.983015 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.999516 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.013570 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032900 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032914 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032924 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.038618 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85
db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.060181 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc 
kubenswrapper[4842]: I0202 06:46:38.077668 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.093608 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135081 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135555 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135634 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.237985 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340838 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340851 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.372304 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:49:46.385788002 +0000 UTC Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.444799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445094 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445105 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445126 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445138 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547648 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547680 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650162 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650179 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650190 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.663749 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75" exitCode=0 Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.663830 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673906 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673943 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673957 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673969 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673980 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.676687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.681891 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.699503 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.715735 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.742461 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754280 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754331 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754348 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754390 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.758667 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.779677 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.825284 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.853058 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856341 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856351 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856375 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.874418 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.897577 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.912536 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.924694 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.936794 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.947549 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.958458 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959328 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959338 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.970133 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.983856 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.996244 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.005898 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.016286 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.027483 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036531 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036681 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036735 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.036704532 +0000 UTC m=+28.413972464 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036850 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036873 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036889 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036926 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036887 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036986 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036937 4842 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.036921667 +0000 UTC m=+28.414189589 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.037137 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.037112522 +0000 UTC m=+28.414380454 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.037156 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.037146782 +0000 UTC m=+28.414414704 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.039886 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061645 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061685 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.064448 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85
db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.084811 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.138649 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.138882 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.138918 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.138938 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.139024 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.138999328 +0000 UTC m=+28.516267280 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165320 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165381 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165422 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165441 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268856 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268917 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268940 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268972 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268991 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371520 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371568 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371582 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371602 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371615 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.372699 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:23:00.118996957 +0000 UTC Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.433665 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.433741 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.433816 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.433914 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.434105 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.434254 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474427 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474479 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474496 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474540 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577824 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577901 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577941 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680742 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680759 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680834 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.683847 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf" exitCode=0 Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.683967 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.701959 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.718766 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.741132 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.770669 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.785978 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786031 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.797498 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.822632 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.838375 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.857535 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.871504 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.888200 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892430 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892498 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892509 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.906795 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.922273 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995489 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995512 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.045729 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ms7n2"]
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.046168 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ms7n2"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.048165 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.048448 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.049144 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.049670 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.065114 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.081402 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.097152 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098561 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098582 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098631 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.109120 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.125013 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.141200 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153554 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tn4\" (UniqueName: \"kubernetes.io/projected/f026f084-0079-47a5-906c-14eb439eaa86-kube-api-access-h7tn4\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153739 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f026f084-0079-47a5-906c-14eb439eaa86-serviceca\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153823 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f026f084-0079-47a5-906c-14eb439eaa86-host\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.168397 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.184145 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202283 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202413 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.206965 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.226135 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.243810 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255183 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7tn4\" (UniqueName: \"kubernetes.io/projected/f026f084-0079-47a5-906c-14eb439eaa86-kube-api-access-h7tn4\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255306 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f026f084-0079-47a5-906c-14eb439eaa86-serviceca\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255392 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f026f084-0079-47a5-906c-14eb439eaa86-host\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255546 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f026f084-0079-47a5-906c-14eb439eaa86-host\") pod \"node-ca-ms7n2\" (UID: 
\"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.257433 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f026f084-0079-47a5-906c-14eb439eaa86-serviceca\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.273753 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.278271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7tn4\" (UniqueName: \"kubernetes.io/projected/f026f084-0079-47a5-906c-14eb439eaa86-kube-api-access-h7tn4\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306676 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306805 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306828 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.373162 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:48:05.169687077 +0000 UTC Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.394553 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.412341 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.412887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.413024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.413149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.413289 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: W0202 06:46:40.416673 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf026f084_0079_47a5_906c_14eb439eaa86.slice/crio-dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6 WatchSource:0}: Error finding container dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6: Status 404 returned error can't find the container with id dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6 Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516416 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516436 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620648 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620724 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620765 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.692785 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa" exitCode=0 Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.692879 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.703799 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.705727 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ms7n2" event={"ID":"f026f084-0079-47a5-906c-14eb439eaa86","Type":"ContainerStarted","Data":"9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.705805 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ms7n2" event={"ID":"f026f084-0079-47a5-906c-14eb439eaa86","Type":"ContainerStarted","Data":"dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.717468 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726572 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.737110 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.752650 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.781316 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.798935 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.814552 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829418 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829466 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.830017 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.841634 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.860448 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.872425 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 
06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.889510 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.901073 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.913974 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.927449 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932424 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932482 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.939968 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.954840 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.968844 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.981772 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.992190 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.002209 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.016044 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.034766 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037456 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037531 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037675 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.045964 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.087736 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.104612 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.117456 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143508 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143573 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143645 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247266 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247281 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247336 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.295647 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.301571 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.310461 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.313559 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.330334 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.344113 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.349926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.349997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.350018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.350036 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.350049 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.356559 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.373263 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.373524 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:37:13.234048348 +0000 UTC Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.392285 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.408840 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.425797 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.433561 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:41 crc kubenswrapper[4842]: E0202 06:46:41.433717 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.434103 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:41 crc kubenswrapper[4842]: E0202 06:46:41.434159 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.434280 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:41 crc kubenswrapper[4842]: E0202 06:46:41.434337 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.444267 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453167 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.468054 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.509150 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.528732 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.554171 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560602 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560727 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560781 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.571572 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.586772 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.602265 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 
06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.626649 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.644363 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.659673 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664233 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664246 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.676086 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.691498 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faa
f92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.711697 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.714342 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017" exitCode=0 Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.714443 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.730766 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.761721 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772344 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772423 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772466 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772483 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.794105 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85
db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.818328 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.838175 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.855287 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.872414 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876790 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876801 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876850 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876865 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.895163 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.922016 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.946185 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.968630 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979726 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.000722 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85
db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.025135 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.065330 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.083910 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.083977 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.083996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.084027 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.084051 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.102611 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.124842 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.137963 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.150736 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.165211 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187806 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187900 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187932 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187951 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291435 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.374413 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:32:03.202381396 +0000 UTC Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394436 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394579 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394697 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394854 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498539 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498582 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.601898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.602542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.602840 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.603013 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.603213 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706727 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.723388 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5" exitCode=0 Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.723481 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.742651 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.756371 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.778468 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.799535 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814793 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814811 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814836 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814855 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.831408 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.854191 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.868166 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.890638 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.914561 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922851 4842 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922896 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922908 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922928 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922943 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.932151 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.944873 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.958142 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.970978 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.987969 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025402 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025480 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089586 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089641 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089752 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 
06:46:43.089805 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.089791852 +0000 UTC m=+36.467059764 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089883 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089911 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089925 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089981 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.089960406 +0000 UTC m=+36.467228318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.090103 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.090053 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.090046488 +0000 UTC m=+36.467314400 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.090782 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.090762556 +0000 UTC m=+36.468030488 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127927 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127958 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127966 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127981 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127992 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.190430 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190642 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190662 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190673 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190732 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.190717035 +0000 UTC m=+36.567984947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231088 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231098 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334294 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334351 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334373 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.378780 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:00:45.077555876 +0000 UTC Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.436383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.436536 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.436546 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.436635 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.436810 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.436932 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438042 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438084 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541385 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541397 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644587 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644695 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.734369 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerStarted","Data":"22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.742750 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.743125 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748504 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.764806 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.778418 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.783900 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.801861 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.822749 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.845012 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851485 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851603 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.866774 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.888962 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.906462 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.922905 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.940041 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.954705 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.954925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955246 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955368 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.971114 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.985687 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.000443 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.022455 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.045593 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058422 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058945 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.069588 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.091643 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.110606 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 
06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.132555 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.153988 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162213 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162275 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162336 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.175745 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.193413 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.217950 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.238364 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.264099 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265123 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.283668 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.300459 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.368382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.368754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.368949 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.369110 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.369479 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.379847 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:26:17.431608144 +0000 UTC Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473956 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577477 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577500 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577537 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577562 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680549 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680574 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680623 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.746959 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.747639 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.815921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.815998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.816017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.816051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.816076 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.820526 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.845599 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.868093 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.892703 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920078 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920268 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.921322 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.921563 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.949063 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.967148 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.983487 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.002800 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025190 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025210 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025347 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.028488 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.055356 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.081720 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.119633 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b
2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127873 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127997 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.153347 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.178475 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.187493 4842 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230909 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230923 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333463 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333495 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.380931 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:18:01.607470475 +0000 UTC Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.432540 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.432653 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.432796 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.432797 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.432872 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.432984 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436148 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436237 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.454417 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.474761 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.488793 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.502920 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.524716 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540368 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540182 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540432 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540647 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.545845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.545936 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.546017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.546172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.546335 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.553507 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.564150 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.564438 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570953 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.571028 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.578685 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.583961 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587728 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587759 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587798 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.598201 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.604062 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607935 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607964 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607975 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607989 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607999 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.614706 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.620450 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624549 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624603 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624634 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.630251 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.642570 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a
2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.642966 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645141 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645187 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645272 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645295 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.653545 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b
2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.674635 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748655 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748715 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc 
kubenswrapper[4842]: I0202 06:46:45.748764 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748818 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748666 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858767 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858854 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858904 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968486 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968506 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968553 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071475 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071593 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173839 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276368 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276435 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276452 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276479 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276498 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379091 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379111 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379133 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379150 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.381118 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:26:42.595104879 +0000 UTC Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481906 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584363 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584426 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584444 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687722 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687749 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687768 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.755178 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/0.log" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.759381 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5" exitCode=1 Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.759442 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.760606 4842 scope.go:117] "RemoveContainer" containerID="2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.783855 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc 
kubenswrapper[4842]: I0202 06:46:46.790671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790723 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790745 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.805127 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.830295 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.852575 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.870442 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.882980 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.892852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.892912 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.892930 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.893004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.893023 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.900918 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.917543 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.933865 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.953848 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.987755 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995380 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995391 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995424 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.015943 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.036143 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.056847 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097631 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097675 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097690 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097723 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201070 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201125 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201170 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201189 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304102 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304122 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304164 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.381528 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:04:09.650156536 +0000 UTC Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406881 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.432453 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:47 crc kubenswrapper[4842]: E0202 06:46:47.432586 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.432691 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:47 crc kubenswrapper[4842]: E0202 06:46:47.432826 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.433045 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:47 crc kubenswrapper[4842]: E0202 06:46:47.433154 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509262 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509270 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509282 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509291 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611473 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611498 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611506 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611527 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713539 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713552 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713559 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.764881 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/0.log" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.767712 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.767911 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.786099 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.799646 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816229 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816238 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816262 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.821448 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.836312 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.854248 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.871002 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.883016 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.915243 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918021 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.938504 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.959503 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.971204 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.980919 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.992966 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.004515 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020160 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020209 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020244 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020266 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020280 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.122986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123029 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123038 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123064 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.225952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226065 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226121 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329305 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329378 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.335469 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.357446 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.378949 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.382268 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:49:05.173493738 +0000 UTC Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.399580 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.419738 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432806 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432824 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.443143 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.460249 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.476829 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.498328 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.518911 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535567 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535621 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.545367 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.564328 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.595642 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.618473 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638468 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc 
kubenswrapper[4842]: I0202 06:46:48.638483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638390 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638522 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741690 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741715 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741774 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.774874 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.775909 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/0.log" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.780653 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c" exitCode=1 Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.780745 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.780846 4842 scope.go:117] "RemoveContainer" containerID="2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.781672 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c" Feb 02 06:46:48 crc kubenswrapper[4842]: E0202 06:46:48.781928 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.805768 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.822650 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.838746 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849410 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849649 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849975 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.856828 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.876730 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.892985 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.908946 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.920847 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.933713 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.951690 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952673 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952735 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952913 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.969106 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.984746 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.002823 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":
\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.025307 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.056835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.057061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:49 crc 
kubenswrapper[4842]: I0202 06:46:49.057212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.057401 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.057557 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160664 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160683 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160732 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264322 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.367131 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.367525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.367997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.368357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.368484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.383759 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:44:27.995604323 +0000 UTC
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.433371 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.433420 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.433538 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.433383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.433693 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.433816 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471592 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471854 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.573931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574641 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574863 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.677951 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678010 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678027 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678068 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781177 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781706 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781974 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.787830 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.794193 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c" Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.794524 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.816906 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.835993 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.853484 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.872185 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886007 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886086 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886116 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886139 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.893142 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.913471 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.929104 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.944807 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.959046 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.981170 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989890 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989946 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.995273 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm"] Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.996096 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.998197 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.998395 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.001945 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.024178 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.053345 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068382 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068591 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068720 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068815 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlzs\" (UniqueName: \"kubernetes.io/projected/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-kube-api-access-8wlzs\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.075433 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.090566 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092463 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092558 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092581 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092599 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.110433 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.127612 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.139908 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.155395 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169492 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169558 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169610 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169655 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlzs\" (UniqueName: \"kubernetes.io/projected/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-kube-api-access-8wlzs\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.170932 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.171491 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.181804 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.181992 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195333 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195413 
4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195463 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.203280 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlzs\" (UniqueName: \"kubernetes.io/projected/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-kube-api-access-8wlzs\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.209077 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.228708 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.257373 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.283051 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298196 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.300527 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.317702 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.317740 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.336862 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.354817 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.368278 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.384096 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:05:52.911008211 +0000 UTC Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401596 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401635 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401650 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.503967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.503992 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.504018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.504032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.504041 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606587 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606629 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711753 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.800408 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" event={"ID":"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3","Type":"ContainerStarted","Data":"73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.800462 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" event={"ID":"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3","Type":"ContainerStarted","Data":"2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.800478 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" event={"ID":"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3","Type":"ContainerStarted","Data":"30dc0e188446265183d7471d7abe21748afca9fd3abb7dc4c4d1557bc2fc214d"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.813946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814042 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.828337 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.844206 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.859289 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.875706 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.890379 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.904428 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.912515 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.916988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917042 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917111 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.922439 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.941647 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e635
5e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.960485 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.981041 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.009380 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020072 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.032389 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.044325 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.065510 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123827 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.141711 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9chjr"] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.142487 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.142583 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.164822 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182660 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182818 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182883 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182927 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183041 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183107 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.1830866 +0000 UTC m=+52.560354552 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183199 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.183185032 +0000 UTC m=+52.560452974 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183338 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183384 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.183371167 +0000 UTC m=+52.560639119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183490 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183521 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183541 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183630 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.183611892 +0000 UTC m=+52.560879834 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.186394 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.204212 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.227146 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228833 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228894 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228917 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228975 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.253055 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.271181 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.284533 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284693 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284733 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284760 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.284768 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htc5\" (UniqueName: \"kubernetes.io/projected/4f6c3b51-669c-4c7b-a23a-ed68d139849e-kube-api-access-5htc5\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284838 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.284815882 +0000 UTC m=+52.662083824 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.284912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.289309 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.304982 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.321482 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332300 4842 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332326 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332345 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.345916 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.368611 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.384700 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:01:23.567711841 +0000 UTC Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.386353 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.386457 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htc5\" (UniqueName: \"kubernetes.io/projected/4f6c3b51-669c-4c7b-a23a-ed68d139849e-kube-api-access-5htc5\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.387038 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.387175 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.887145719 +0000 UTC m=+37.264413661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.397481 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.421487 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htc5\" (UniqueName: \"kubernetes.io/projected/4f6c3b51-669c-4c7b-a23a-ed68d139849e-kube-api-access-5htc5\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.429618 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.432872 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.432933 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.433030 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.433070 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.433251 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.433411 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435831 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435899 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435960 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435984 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.458948 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b8
0a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.476149 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.496925 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539573 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539706 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642796 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642857 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746878 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746944 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746961 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746987 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.747004 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850424 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850488 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850548 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.891763 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.892066 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.892165 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:52.892138291 +0000 UTC m=+38.269406233 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954384 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954409 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954429 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057643 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057700 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057742 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057761 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161109 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161201 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161248 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161266 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264928 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264972 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264990 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367719 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.385501 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:47:16.056349826 +0000 UTC
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.432848 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.433119 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472463 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472635 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575672 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575690 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575700 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678104 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678264 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784272 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784330 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784397 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887370 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887448 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.901015 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.901251 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.901379 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:54.901351918 +0000 UTC m=+40.278619860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.971694 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.973141 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"
Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.973424 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989802 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989887 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
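The secret.go:188 and nestedpendingoperations.go:348 entries above fail with object "openshift-multus"/"metrics-daemon-secret" not registered, meaning the kubelet's secret manager has not yet registered that object for this pod. The first thing worth confirming from outside is that the secret actually exists on the API server; a minimal client-go sketch of that check (the KUBECONFIG handling is an assumption for illustration):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the cluster's admin kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and name are taken from the failing mount above.
	s, err := client.CoreV1().Secrets("openshift-multus").Get(context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("secret present with %d keys\n", len(s.Data))
}

If the secret exists server-side, the "not registered" error points at the kubelet's own object registration not having completed yet rather than at a missing object.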
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092729 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092757 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092782 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.195940 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196021 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196073 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299397 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299520 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299539 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.386189 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:31:13.527043198 +0000 UTC
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.403884 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.403967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.403984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.404008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.404025 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.433525 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.433615 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:53 crc kubenswrapper[4842]: E0202 06:46:53.433700 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:53 crc kubenswrapper[4842]: E0202 06:46:53.433783 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.433873 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:53 crc kubenswrapper[4842]: E0202 06:46:53.434017 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508499 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508552 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508592 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508610 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611661 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611678 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714364 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714436 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714458 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714475 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817771 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.921384 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.921750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.921883 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.922023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.922154 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
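Each setters.go:603 entry above carries the node's full Ready condition as inline JSON. When grepping these lines out of the journal, a small decoder makes the reason and transition time easy to extract; a minimal sketch (the struct mirrors only the keys visible in the log payload and is not a kubelet type):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the keys in the setters.go:603 payload above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition JSON trimmed from one of the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}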
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.025889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.025995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.026018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.026047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.026066 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129415 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129528 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129546 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232197 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232248 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232268 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335390 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.386938 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:47:39.105662089 +0000 UTC
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.432608 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:54 crc kubenswrapper[4842]: E0202 06:46:54.432815 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441430 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441540 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441580 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544550 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544657 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544714 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648186 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648422 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751874 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751885 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855585 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855683 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.926617 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:54 crc kubenswrapper[4842]: E0202 06:46:54.926822 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:46:54 crc kubenswrapper[4842]: E0202 06:46:54.926961 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:58.926932055 +0000 UTC m=+44.304199997 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered
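Comparing this nestedpendingoperations.go:348 entry with the one at 06:46:52.901379 shows the mount retry delay doubling: durationBeforeRetry 2s, then 4s. A minimal sketch of that doubling-with-cap policy (the cap value and attempt count are assumptions for illustration, not read from kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	// First delay observed in the log entries above.
	delay := 2 * time.Second
	// Assumed illustrative ceiling; kubelet's actual cap may differ.
	maxDelay := 2 * time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, delay)
		delay *= 2 // double the delay after each failure
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}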
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959453 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959629 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062886 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062953 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062969 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062990 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.063005 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166062 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166139 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166149 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269713 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373209 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373310 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373328 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373370 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.388160 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:05:45.349260439 +0000 UTC
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.432967 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.433039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.433039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.433181 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.433373 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.433525 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
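The certificate_manager.go:356 lines above report a different rotation deadline on each pass (10:47:16, 03:31:13, 11:47:39, now 13:05:45) against the same 2026-02-24 expiry: the deadline is re-randomized within the certificate's lifetime every time rotation is evaluated. A sketch of that jittered-deadline idea, assuming the commonly described 70-90% band of the cert lifetime (the band and the notBefore below are assumptions, not read from this cluster):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a fresh random point in the assumed 70-90%
// band of the certificate's lifetime, mimicking the re-randomization
// visible in the log lines above.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // issue time assumed
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}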
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.456299 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476200 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476796 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476857 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476879 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476898 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.495425 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.522840 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.543209 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.566133 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579387 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.587704 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.602065 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.617209 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.633484 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.652051 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 
06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.670963 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682421 4842 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682466 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682486 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.696188 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.723166 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.742679 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.779754 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786108 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786180 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889751 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889801 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889843 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889860 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974975 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.975014 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.996922 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002000 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002087 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002166 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.045987 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055755 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.079042 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.084954 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085019 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.101733 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105749 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105829 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.122433 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.122628 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124605 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227608 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227721 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227775 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330398 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330423 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330452 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.388755 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:52:49.015804966 +0000 UTC Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.432894 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.433109 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433409 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433478 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433494 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.536878 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.536976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.536997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.537022 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.537040 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.640383 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.640439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.640455 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.640479 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.640497 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.743427 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.743487 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.743507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.743532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.743550 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.846763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.846838 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.846857 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.846880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.846897 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.950138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.950201 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.950250 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.950282 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.950301 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.053425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.053575 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.053596 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.053619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.053635 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.156786 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.156837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.156853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.156879 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.156895 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.259543 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.259670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.259705 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.259755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.259787 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.362872 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.362931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.362945 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.362968 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.362982 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.389659 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:09:01.410578989 +0000 UTC Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.432871 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.433031 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:57 crc kubenswrapper[4842]: E0202 06:46:57.433095 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.432871 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:57 crc kubenswrapper[4842]: E0202 06:46:57.433294 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:57 crc kubenswrapper[4842]: E0202 06:46:57.433404 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.468355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.468429 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.468451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.468494 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.468520 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.571962 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.572034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.572053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.572081 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.572103 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.675731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.675810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.675828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.675857 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.675879 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.779551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.779624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.779640 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.779664 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.779682 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.883547 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.883635 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.883655 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.883691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.883714 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.986755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.986865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.986888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.986921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.986941 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:57Z","lastTransitionTime":"2026-02-02T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.090781 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.090868 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.090887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.090920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.090941 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.193990 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.194056 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.194077 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.194106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.194125 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.298014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.298088 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.298115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.298145 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.298166 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.390782 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:10:21.106797238 +0000 UTC Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.402069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.402152 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.402176 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.402206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.402249 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.433539 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:58 crc kubenswrapper[4842]: E0202 06:46:58.433784 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.505585 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.505637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.505652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.505670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.505681 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.609546 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.609605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.609623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.609650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.609669 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.713577 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.713647 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.713665 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.713691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.713708 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.816654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.816727 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.816751 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.816782 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.816804 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.919909 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.920017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.920035 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.920063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.920087 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:58Z","lastTransitionTime":"2026-02-02T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.983069 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:58 crc kubenswrapper[4842]: E0202 06:46:58.983446 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:58 crc kubenswrapper[4842]: E0202 06:46:58.983628 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:47:06.983586834 +0000 UTC m=+52.360854916 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.023396 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.023684 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.023860 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.023998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.024156 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:59Z","lastTransitionTime":"2026-02-02T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.127546 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.127611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.127629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.127655 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.127673 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:59Z","lastTransitionTime":"2026-02-02T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
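Annotation: the MountVolume failure above is a separate symptom of the same startup state. "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the kubelet's secret manager has not yet registered that object for watching, so the volume cannot be populated; nestedpendingoperations schedules a retry with backoff, here 8s (next attempt at 06:47:06). A quick existence check for the secret from outside the node, sketched with the Python kubernetes client and assuming a kubeconfig with read access to the openshift-multus namespace:

# Minimal sketch (not from the log): confirm the secret the kubelet is
# waiting on exists at the API. Assumes a working kubeconfig.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
try:
    s = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
    print("found:", s.metadata.name, "keys:", sorted((s.data or {}).keys()))
except ApiException as e:
    print("lookup failed:", e.status, e.reason)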
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.391565 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:07:38.155641059 +0000 UTC
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.433456 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.433528 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.433551 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:59 crc kubenswrapper[4842]: E0202 06:46:59.433699 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:59 crc kubenswrapper[4842]: E0202 06:46:59.433894 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:59 crc kubenswrapper[4842]: E0202 06:46:59.434132 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
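Every NodeNotReady condition and every "Error syncing pod" record in this stretch traces back to the single cause stated in the message itself: no CNI configuration file in /etc/kubernetes/cni/net.d/. The kubelet keeps NetworkReady=false until its network plugin finds a config file in that directory. The sketch below mirrors that discovery step; the accepted extensions (.conf, .conflist, .json) are stated here as an assumption following CNI convention:

    # Minimal sketch of the check the kubelet's message implies: scan the CNI
    # conf dir for network configs and report what, if anything, is there.
    import json
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

    def find_cni_configs(conf_dir: Path):
        configs = []
        for p in sorted(conf_dir.glob("*")):
            if p.suffix not in (".conf", ".conflist", ".json"):
                continue
            try:
                configs.append((p.name, json.loads(p.read_text())))
            except (OSError, json.JSONDecodeError) as err:
                print(f"skipping {p.name}: {err}")
        return configs

    found = find_cni_configs(CNI_CONF_DIR)
    if not found:
        print(f"no CNI configuration file in {CNI_CONF_DIR}/ -- network provider not started?")
    for name, conf in found:
        print(name, "->", conf.get("name"), conf.get("type", "conflist"))

On OpenShift the cluster network operator (Multus/OVN-Kubernetes) is what usually writes that file, so an empty directory generally means the network pods themselves have not started yet.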
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.060446 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.060864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.061049 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.061421 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.061645 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:00Z","lastTransitionTime":"2026-02-02T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.391939 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:38:34.269235358 +0000 UTC
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.432883 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:00 crc kubenswrapper[4842]: E0202 06:47:00.433075 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
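The certificate_manager.go records recur about once per second with the same expiration but a different rotation deadline each time. client-go's certificate manager jitters the deadline, recomputing a random point late in the certificate's lifetime on every pass, which is why the dates vary from line to line. The fraction used below (70-90% of the lifetime) and the issue date are assumptions for illustration:

    # Sketch of a jittered rotation deadline, assuming client-go's approach of
    # rotating at a random point roughly 70-90% of the way through the
    # certificate's lifetime.
    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        lifetime = not_after - not_before
        fraction = random.uniform(0.7, 0.9)  # assumed jitter window
        return not_before + timedelta(seconds=lifetime.total_seconds() * fraction)

    expiry = datetime(2026, 2, 24, 5, 53, 3)   # expiration from the log
    issued = expiry - timedelta(days=365)      # hypothetical issue date
    for _ in range(3):
        print("rotation deadline:", rotation_deadline(issued, expiry))

Each recomputation lands on a different date, matching the spread of deadlines in the log.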
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.101506 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.101590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.101612 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.101642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.101662 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:01Z","lastTransitionTime":"2026-02-02T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.392979 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:26:29.403231608 +0000 UTC
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.433156 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.433206 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:01 crc kubenswrapper[4842]: E0202 06:47:01.433358 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.433397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:01 crc kubenswrapper[4842]: E0202 06:47:01.433543 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:01 crc kubenswrapper[4842]: E0202 06:47:01.433682 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.030980 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.031046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.031063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.031086 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.031102 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:02Z","lastTransitionTime":"2026-02-02T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.393320 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:41:16.779465721 +0000 UTC
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.433070 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:02 crc kubenswrapper[4842]: E0202 06:47:02.433289 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.062779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.062841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.062853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.062869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.062881 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:03Z","lastTransitionTime":"2026-02-02T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.393676 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:32:28.465455856 +0000 UTC
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.432550 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.432622 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.432654 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:03 crc kubenswrapper[4842]: E0202 06:47:03.432795 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:03 crc kubenswrapper[4842]: E0202 06:47:03.432862 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:03 crc kubenswrapper[4842]: E0202 06:47:03.432921 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
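Note the cadence: each affected pod is retried on a ~2 second resync (06:46:59.433, 06:47:01.433, 06:47:03.432 for the diagnostics and console pods) and skipped again while the node's Ready condition is False. A small poll of that condition, sketched below assuming the Python kubernetes client (the node name "crc" comes from the log), shows when the situation clears:

    # Hypothetical watcher: poll the node's Ready condition until the CNI
    # config appears and the kubelet reports the network as ready.
    import time
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    while True:
        node = v1.read_node("crc")
        ready = next(c for c in node.status.conditions if c.type == "Ready")
        print(ready.status, "-", ready.reason, "-", ready.message)
        if ready.status == "True":
            break
        time.sleep(2)  # matches the ~2 s resync cadence seen in the log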
Has your network provider started?"} Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.893444 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.893505 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.893522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.893545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.893580 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:03Z","lastTransitionTime":"2026-02-02T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.997440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.997494 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.997510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.997533 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.997549 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:03Z","lastTransitionTime":"2026-02-02T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.101279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.101332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.101349 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.101371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.101389 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.204738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.204802 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.204815 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.204840 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.204860 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.307507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.307559 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.307574 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.307591 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.307605 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.394825 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:26:31.71126672 +0000 UTC Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.409682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.409757 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.409774 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.409798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.409816 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.433200 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:04 crc kubenswrapper[4842]: E0202 06:47:04.433708 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.511855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.512268 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.512409 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.512576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.512718 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.615751 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.615811 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.615829 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.615854 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.615892 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.718604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.718654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.718672 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.718734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.718752 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.821998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.822061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.822079 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.822102 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.822119 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.924631 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.924689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.924705 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.924728 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.924745 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:04Z","lastTransitionTime":"2026-02-02T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.027392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.027483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.027510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.027541 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.027568 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.130586 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.130639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.130656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.130679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.130697 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.233590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.233636 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.233695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.233717 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.233732 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.336594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.337104 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.337189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.337299 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.337384 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.396106 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:26:39.887517205 +0000 UTC Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.432583 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.432583 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.432610 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:05 crc kubenswrapper[4842]: E0202 06:47:05.432832 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:05 crc kubenswrapper[4842]: E0202 06:47:05.433522 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:05 crc kubenswrapper[4842]: E0202 06:47:05.433615 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.434392 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.441685 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.441737 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.441754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.441778 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.441797 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.462529 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.480872 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.497840 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.514784 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.529464 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.541735 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544249 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544359 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544428 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.560452 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.575591 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.589203 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.606406 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.625923 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.641553 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.649695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650079 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650159 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650272 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650363 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.656238 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.669495 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc 
kubenswrapper[4842]: I0202 06:47:05.689512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.708311 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.752976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753033 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753050 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753091 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855947 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.870927 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.873167 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.873526 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.893176 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.912137 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.930199 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.953404 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.957905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.957970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.957993 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.958024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.958048 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.969453 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.985138 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.998583 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.023125 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.058641 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060706 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060736 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060745 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060766 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.078025 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.094507 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.118970 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.138118 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.158749 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164348 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164427 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164466 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.173022 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.176592 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.197425 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.207179 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231779 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.248057 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.248736 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.252995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253072 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.271030 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.276145 4842 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280835 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.285160 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.294571 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298405 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298453 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.300271 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.314480 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.316004 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318820 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318959 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.332616 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.333568 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.333721 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336183 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336239 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336257 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336280 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336300 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.350873 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.362836 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.380768 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.396376 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:29:56.279199742 +0000 UTC Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.397108 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.414057 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.428044 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.432601 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.432812 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439408 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439507 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.443868 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.468430 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.486924 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.502491 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.514467 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542207 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646200 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646254 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750361 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750403 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750421 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853872 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853914 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853928 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.881590 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.882560 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.886693 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" exitCode=1 Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.886776 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.886850 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.887876 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.888122 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.912390 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.931490 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.946645 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956628 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.962087 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.990504 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.009868 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.026996 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.041677 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059244 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059325 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.062165 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.069622 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.069747 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.069831 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" 
failed. No retries permitted until 2026-02-02 06:47:23.069812358 +0000 UTC m=+68.447080270 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.077101 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.093855 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.109774 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.130745 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 
2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.147424 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.157834 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162727 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162777 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162788 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.175929 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.189310 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265436 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265478 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265491 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265499 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271710 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271803 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271874 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.271939 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.271979 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.271965771 +0000 UTC m=+84.649233673 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272038 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.272031572 +0000 UTC m=+84.649299484 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272088 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272108 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.272102844 +0000 UTC m=+84.649370756 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272154 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272164 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272173 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272194 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.272188636 +0000 UTC m=+84.649456548 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369142 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.372983 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373177 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373212 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373273 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373364 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.373336674 +0000 UTC m=+84.750604616 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.397500 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:22:13.189612462 +0000 UTC Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.433198 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.433262 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.433287 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.433428 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.433547 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.433655 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472388 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472502 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472527 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472544 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575924 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575943 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575966 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575984 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679039 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679147 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783698 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783727 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783748 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888564 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888989 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.894861 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.905617 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.905892 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.925598 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.944030 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.962137 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.981511 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992322 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:07.999931 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.018581 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.034357 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.049665 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.069186 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.084814 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095191 4842 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095210 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095276 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095300 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.105116 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.126667 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.147142 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.181392 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198583 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198605 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.207320 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.226007 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.247207 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303187 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303211 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303999 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.398426 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:53:28.093596147 +0000 UTC Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408082 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408160 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408286 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.433370 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:08 crc kubenswrapper[4842]: E0202 06:47:08.433548 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511075 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511150 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511178 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511196 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614746 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614765 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614810 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717801 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717858 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821410 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821429 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.924998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925049 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925075 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925091 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029256 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029322 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029381 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132924 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132959 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236169 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338537 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338567 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338585 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.398737 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:24:17.076782307 +0000 UTC Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.433408 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.433462 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.433576 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:09 crc kubenswrapper[4842]: E0202 06:47:09.433803 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:09 crc kubenswrapper[4842]: E0202 06:47:09.434102 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:09 crc kubenswrapper[4842]: E0202 06:47:09.434199 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.446911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.446979 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.446998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.447028 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.447051 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550883 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654579 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654635 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654646 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654666 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654679 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758421 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758466 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862411 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862477 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862502 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862514 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966429 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966466 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069957 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069978 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173176 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173188 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173240 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276509 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276555 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.379922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.379997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.380017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.380048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.380068 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.400008 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:16:43.424681894 +0000 UTC Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.433489 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:10 crc kubenswrapper[4842]: E0202 06:47:10.433640 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.481943 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.481983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.481992 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.482008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.482018 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584319 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584341 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687683 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790426 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790485 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790493 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894825 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998620 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998648 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998667 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102494 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102550 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102576 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.205988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206065 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206139 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309309 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309361 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.400882 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:33:31.766034969 +0000 UTC Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.412920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.412982 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.413001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.413025 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.413045 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.433548 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.433645 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.433853 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:11 crc kubenswrapper[4842]: E0202 06:47:11.433855 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:11 crc kubenswrapper[4842]: E0202 06:47:11.434080 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:11 crc kubenswrapper[4842]: E0202 06:47:11.434452 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517108 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517196 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517253 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517285 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517306 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620176 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620265 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620278 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729011 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729134 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729333 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729367 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832420 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832438 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935577 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935596 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040448 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040478 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040497 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144058 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144137 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144204 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247871 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247950 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.401848 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:21:36.895349 +0000 UTC Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.432864 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:12 crc kubenswrapper[4842]: E0202 06:47:12.433052 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455313 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455498 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559448 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559543 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559562 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559625 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662580 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662700 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766175 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766203 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766254 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869884 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974108 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974136 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974198 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.077925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.077995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.078007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.078031 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.078049 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181540 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285280 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285360 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388301 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388352 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388388 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.402978 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:31:22.418927952 +0000 UTC Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.433331 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.433378 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.433504 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:13 crc kubenswrapper[4842]: E0202 06:47:13.433762 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:13 crc kubenswrapper[4842]: E0202 06:47:13.433910 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:13 crc kubenswrapper[4842]: E0202 06:47:13.434137 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492070 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492202 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492271 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595906 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595963 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699329 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699447 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699467 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699511 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.802625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803298 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803398 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803526 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906812 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906878 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906888 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.009995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010093 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010110 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113547 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113666 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113685 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216362 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216442 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319568 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319644 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319667 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319684 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.404164 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:06:13.661010312 +0000 UTC Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422673 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.433119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:14 crc kubenswrapper[4842]: E0202 06:47:14.433383 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525886 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525929 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628870 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.629001 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732812 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732908 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732923 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836152 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836250 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836307 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836327 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939537 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043473 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043528 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147056 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147232 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147261 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250588 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250636 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250655 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.354252 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.354704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.354913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.355122 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.355362 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.404442 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:40:26.332310697 +0000 UTC Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.436151 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:15 crc kubenswrapper[4842]: E0202 06:47:15.436602 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.436897 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:15 crc kubenswrapper[4842]: E0202 06:47:15.436963 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.437131 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:15 crc kubenswrapper[4842]: E0202 06:47:15.437316 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457612 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457660 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457700 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457717 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.461974 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.480876 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.506324 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.522314 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.536810 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.553920 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.559980 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560135 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.569994 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.588108 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.611313 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.633368 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.651740 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663114 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663132 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.669173 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.690456 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.709988 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.729814 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.744084 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.763943 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 
2025-08-24T17:21:41Z"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.767048 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869665 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869681 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869693 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972389 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972446 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075411 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075488 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075561 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179162 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179247 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282800 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282819 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386262 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386558 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386924 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.405604 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:17:01.118713144 +0000 UTC
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.432947 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.433208 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489834 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489902 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489919 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489964 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.593875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.593944 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.593969 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.594001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.594026 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655156 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655288 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655309 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655340 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655366 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.678346 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684729 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684811 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684836 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.707331 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713856 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713894 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.733883 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740460 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740550 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740592 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.758573 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763636 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763655 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.785243 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.785454 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788059 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
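The repeated "Error updating node status, will retry" failures above share one root cause: each node-status PATCH must pass the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/node, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-02, so TLS verification fails on every attempt until the kubelet gives up with "update node status exceeds retry count". A minimal diagnostic sketch, not part of the log, for reading the validity window of whatever certificate the endpoint currently serves; it assumes Python 3 with the cryptography package (version 42+ for the *_utc accessors) running on the node itself:

```python
# Fetch the webhook's serving certificate despite it being expired, and print
# its validity window to confirm the x509 error reported by the kubelet.
import socket
import ssl
from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint from the webhook error above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # we want the cert even though it is expired

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("subject:   ", cert.subject.rfc4514_string())
print("not before:", cert.not_valid_before_utc)
print("not after: ", cert.not_valid_after_utc)  # 2025-08-24T17:21:41Z per the log
```

If the printed "not after" matches the 2025-08-24 date in the error, the fix is rotating the webhook's certificate (or correcting the node clock if it is wrong), not anything on the kubelet side.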
event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788157 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788175 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893818 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893892 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893918 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893934 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997391 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997533 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100286 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100341 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100359 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100399 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204553 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204599 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.307899 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.307977 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.307995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.308033 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.308052 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.406303 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:28:52.480184415 +0000 UTC Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411255 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411304 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.433684 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.433800 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:17 crc kubenswrapper[4842]: E0202 06:47:17.433888 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:17 crc kubenswrapper[4842]: E0202 06:47:17.434328 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.434466 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:17 crc kubenswrapper[4842]: E0202 06:47:17.434607 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514737 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514842 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618136 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618159 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618175 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720949 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720958 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823508 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823588 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823597 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926285 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926301 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030527 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030555 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030613 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133608 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133627 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133657 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133677 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237200 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237370 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237483 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340859 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340935 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340954 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340985 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.341004 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.407302 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:43:54.989643157 +0000 UTC Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.432732 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:18 crc kubenswrapper[4842]: E0202 06:47:18.432946 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444159 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444234 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444243 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444269 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552282 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552308 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.408279 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:38:27.123488053 +0000 UTC
Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.433054 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.433316 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.433572 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:19 crc kubenswrapper[4842]: E0202 06:47:19.433633 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:19 crc kubenswrapper[4842]: E0202 06:47:19.433737 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:19 crc kubenswrapper[4842]: E0202 06:47:19.433517 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.408943 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:40:21.27410297 +0000 UTC
Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.433347 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:20 crc kubenswrapper[4842]: E0202 06:47:20.433529 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.409816 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:57:59.287310811 +0000 UTC
Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.433148 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.433148 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:21 crc kubenswrapper[4842]: E0202 06:47:21.433308 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:21 crc kubenswrapper[4842]: E0202 06:47:21.433389 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.433386 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:21 crc kubenswrapper[4842]: E0202 06:47:21.433475 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.410386 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:25:13.646848527 +0000 UTC
Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.432909 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:22 crc kubenswrapper[4842]: E0202 06:47:22.433394 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.433610 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951"
Feb 02 06:47:22 crc kubenswrapper[4842]: E0202 06:47:22.433960 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59"
Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.157932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.158268 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.158558 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:47:55.158528284 +0000 UTC m=+100.535796186 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.411119 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:02:28.346541564 +0000 UTC
Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.432677 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.432729 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.432880 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.432911 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.433299 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.433147 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615391 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718673 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718683 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.821934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.821984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.821994 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.822012 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.822024 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924815 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027253 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027324 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027392 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.130762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131105 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131121 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131131 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234595 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234615 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234889 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337856 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.338342 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.411616 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:41:50.626416011 +0000 UTC Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.433340 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:24 crc kubenswrapper[4842]: E0202 06:47:24.433573 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440406 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440444 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543167 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543240 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543252 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543270 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543280 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646268 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646298 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646311 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749766 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749813 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852391 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852468 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852483 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955134 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955205 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969068 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/0.log" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969140 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" containerID="8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d" exitCode=1 Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969183 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerDied","Data":"8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969842 4842 scope.go:117] "RemoveContainer" containerID="8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.986015 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:24Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.999623 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:24Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.012614 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.023631 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.042082 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.055644 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057301 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057424 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057459 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057480 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.067296 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.085550 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.101816 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.116040 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.137103 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.150874 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160704 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160719 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.164797 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.177136 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.191363 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.206286 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.223801 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264072 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264141 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264163 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264177 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367116 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367146 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.412629 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:02:53.413457016 +0000 UTC Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.433157 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.433305 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:25 crc kubenswrapper[4842]: E0202 06:47:25.433337 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.433422 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:25 crc kubenswrapper[4842]: E0202 06:47:25.433571 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:25 crc kubenswrapper[4842]: E0202 06:47:25.433789 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.453178 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.469852 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471393 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.498868 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.518353 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.530983 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.541514 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.560450 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.571560 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577526 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577571 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577590 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.589886 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.602287 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.616008 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.629706 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.643912 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.653518 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.664195 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.676937 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e1
5c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684698 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684809 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684864 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.690401 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788508 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891153 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891252 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891277 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891294 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.975566 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/0.log" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.975657 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994348 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994529 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994581 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994592 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994627 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.009198 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.027498 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.049906 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.066092 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.079180 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.093409 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098456 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098528 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.106261 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.119013 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.136098 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.153645 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.171079 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.187041 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.201678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202287 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202340 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202618 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.215679 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.233647 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.246949 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305529 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305547 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305571 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305587 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.408886 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.408955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.408978 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.409013 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.409034 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.414089 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:17:07.757159976 +0000 UTC Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.432711 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:26 crc kubenswrapper[4842]: E0202 06:47:26.433097 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511175 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511511 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.614941 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615109 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.717724 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.717841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.717947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.718076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.718159 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820968 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.821046 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923410 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923733 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923904 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.995990 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996037 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996049 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996079 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.008046 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.017683 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.017922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
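Every one of these `kubelet_node_status.go:585` retries fails the same way: the node-status PATCH is intercepted by the `node.network-node-identity.openshift.io` webhook at `https://127.0.0.1:9743/node`, whose serving certificate has a notAfter of 2025-08-24 while the node clock reads 2026-02-02. You can confirm the certificate's validity window independently of kubelet; a rough sketch, assuming the `cryptography` package (>= 42 for the `*_utc` accessors) is installed and the endpoint is reachable from where you run it:

```python
import datetime
import ssl

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # the webhook endpoint named in the error

# Fetch whatever certificate the server presents, without verifying it;
# verification is exactly what is failing in the log above.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.datetime.now(datetime.timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)
```

Note the failure is on the API request path ("Internal error occurred: failed calling webhook"), so kubelet can do nothing but retry; the remedy is rotating the webhook's serving certificate or correcting whichever side's clock is wrong. The error text only establishes that 2026-02-02 is after the certificate's 2025-08-24 notAfter.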
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.018024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.018099 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.018159 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.031083 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034630 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034674 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
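Reading the patch itself is awkward because the `err=` value is a Go %q-quoted string, so the JSON arrives with every quote escaped once by klog and once by the patch's own quoting (hence the `\\\"` runs). Below is a small sketch to unquote and parse one of these entries; the two-pass `unicode_escape` trick is an approximation of Go's unquoting that happens to suffice for these payloads, which contain only `\"` and `\\` escapes:

```python
import json

def unquote_go(s):
    # Close enough to Go's strconv.Unquote for this payload; a real parser
    # would implement the full escape rules.
    return s.encode("utf-8").decode("unicode_escape")

def extract_patch(journal_line):
    """Pull the status patch JSON out of one 'failed to patch status' entry.

    Assumes a single journal entry is passed in, so the ' for node' marker
    that ends the quoted patch appears exactly once after it.
    """
    start = journal_line.index("failed to patch status ") + len("failed to patch status ")
    end = journal_line.index(" for node")
    quoted = journal_line[start:end]        # \"{\\\"status\\\": ... }\"
    inner = unquote_go(quoted).strip('"')   # {\"status\": ... }
    return json.loads(unquote_go(inner))
```

Decoded, the patch is exactly what the surrounding log implies: a `status` object carrying the four conditions, `allocatable`/`capacity`, the image list, and `nodeInfo`. The `$setElementOrder/conditions` key is the strategic-merge-patch directive that tells the API server the intended ordering of the `conditions` list elements.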
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034683 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034701 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034712 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.046413 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050458 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050496 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
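For contrast, the kubelet-serving certificate is healthy: the `certificate_manager.go:356` line at the top of this excerpt reports expiration 2026-02-24 05:53:03 UTC with a rotation deadline of 2025-11-20 01:17:07 UTC. client-go's certificate manager derives that deadline by jittering within the late portion of the certificate's validity window (commonly described as the 70-90% band; treat the exact band here as my assumption). A sketch of the arithmetic, also assuming a one-year certificate issued 2025-02-24, since the log shows only the expiration:

```python
import datetime
import random

def rotation_deadline(not_before, not_after, rand=random.random):
    """Pick a rotation time at a random point in the (assumed) 70-90% band
    of the certificate's lifetime, mimicking client-go's jitter."""
    lifetime = not_after - not_before
    fraction = 0.7 + 0.2 * rand()          # uniform in [0.7, 0.9)
    return not_before + fraction * lifetime

not_before = datetime.datetime(2025, 2, 24, 5, 53, 3, tzinfo=datetime.timezone.utc)
not_after  = datetime.datetime(2026, 2, 24, 5, 53, 3, tzinfo=datetime.timezone.utc)
print(rotation_deadline(not_before, not_after))
# The logged deadline, 2025-11-20 01:17:07 UTC, falls ~74% of the way
# through this window, consistent with the band above.
```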
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050506 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050520 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050531 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.062321 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066163 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
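The patch also carries the node's resource picture, constant across all the retries: capacity cpu `12`, memory `32865360Ki`, ephemeral-storage `83293888Ki`, against allocatable cpu `11800m`, memory `32404560Ki`, ephemeral-storage `76396645454` (plain bytes). The gaps are what kubelet withholds for system/kube reservations and eviction thresholds. A quick check of those gaps from the logged quantities, with quantity parsing deliberately limited to the three forms that actually appear here:

```python
def parse_quantity(q):
    """Parse only the quantity forms in this log: 'm' (milli), 'Ki', or bare."""
    if q.endswith("m"):
        return int(q[:-1]) / 1000      # millicores -> cores
    if q.endswith("Ki"):
        return int(q[:-2]) * 1024      # KiB -> bytes
    return int(q)                      # bare cores or bytes

capacity    = {"cpu": "12",     "memory": "32865360Ki", "ephemeral-storage": "83293888Ki"}
allocatable = {"cpu": "11800m", "memory": "32404560Ki", "ephemeral-storage": "76396645454"}

for res in capacity:
    gap = parse_quantity(capacity[res]) - parse_quantity(allocatable[res])
    print(res, round(gap, 3))
# cpu: 0.2 cores withheld; memory: 471,859,200 bytes (450 MiB);
# ephemeral-storage: ~8.9 GB, which likely also folds in eviction thresholds.
```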
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066239 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066276 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.082347 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.082485 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
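
The status patch above is rejected because the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-02-02. A minimal sketch for confirming that from the node itself, assuming Python with the third-party cryptography package is available; the host and port are taken from the failed Post URL in the entry above:

import datetime
import ssl

from cryptography import x509  # third-party; assumed available on the host

HOST, PORT = "127.0.0.1", 9743  # from the failed webhook Post URL above

# ca_certs=None leaves verification off, so the expired cert can still be fetched.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

now = datetime.datetime.now(datetime.timezone.utc)
print("notBefore:", cert.not_valid_before_utc)  # on cryptography < 42 use not_valid_before
print("notAfter: ", cert.not_valid_after_utc)   # on cryptography < 42 use not_valid_after
print("expired:  ", now > cert.not_valid_after_utc)
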
event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084580 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084615 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.187971 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188079 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188137 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292150 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.416149 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:24:40.676097321 +0000 UTC
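
Each certificate_manager.go:356 line in this capture prints a different rotation deadline because client-go's certificate manager re-jitters the deadline on every attempt, placing it at a random point roughly 70-90% of the way through the certificate's validity window. Against a node clock of 2026-02-02, every January deadline shown here is already past, so rotation is overdue and retried about once a second. A sketch of that arithmetic, with the expiry taken from the log line and the issuance time assumed (notBefore is not logged):

import datetime
import random

# Expiry as printed in the log; issuance time is an assumption.
not_after = datetime.datetime(2026, 2, 24, 5, 53, 3, tzinfo=datetime.timezone.utc)
not_before = not_after - datetime.timedelta(days=90)  # assumed lifetime

lifetime = not_after - not_before
deadline = not_before + lifetime * (0.7 + 0.2 * random.random())  # jitter in [0.7, 0.9)
print("rotation deadline:", deadline)
# A deadline in the past means rotation is already due, so the kubelet
# re-computes and logs a fresh jittered deadline on the next attempt.
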
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498325 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498460 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601174 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012268 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012319 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012361 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012379 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.416920 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:39:31.884751975 +0000 UTC
Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.433437 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525712 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525765 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525791 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628944 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628978 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628992 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046893 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046910 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046932 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046948 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
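
The condition={...} payload in these setters.go:603 lines is plain JSON, so the Ready condition can be pulled out and inspected directly; a sketch with the payload trimmed to the fields shown above:

import datetime
import json

# condition={...} payload from the setters.go:603 line, trimmed to the fields of interest
payload = ('{"type":"Ready","status":"False",'
           '"lastHeartbeatTime":"2026-02-02T06:47:29Z",'
           '"lastTransitionTime":"2026-02-02T06:47:29Z",'
           '"reason":"KubeletNotReady"}')

cond = json.loads(payload)
beat = datetime.datetime.fromisoformat(cond["lastHeartbeatTime"].replace("Z", "+00:00"))
print(cond["type"], cond["status"], cond["reason"], "heartbeat at", beat.isoformat())
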
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.418075 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:19:13.574913351 +0000 UTC
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.432679 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.432756 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:29 crc kubenswrapper[4842]: E0202 06:47:29.432847 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.432719 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:29 crc kubenswrapper[4842]: E0202 06:47:29.432950 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461273 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461435 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461463 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564976 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.079910 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080205 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080311 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.418202 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:45:09.027383852 +0000 UTC
Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.433250 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.492896 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493514 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.595938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596261 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.010929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.010991 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.011008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.011032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.011050 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114716 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114795 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114847 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219453 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219674 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.220046 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.323172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.323578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.323808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.324038 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.324490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.419101 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:51:44.507988361 +0000 UTC Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.427991 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428029 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428079 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.435095 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:31 crc kubenswrapper[4842]: E0202 06:47:31.435477 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.435341 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.435601 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:31 crc kubenswrapper[4842]: E0202 06:47:31.436283 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:31 crc kubenswrapper[4842]: E0202 06:47:31.436468 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.530920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531453 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531644 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531772 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635094 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635143 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635165 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738672 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738722 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738740 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738779 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841528 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841729 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.945929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.945997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.946020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.946054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.946080 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049987 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.050053 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.153715 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154502 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154608 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258010 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258081 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258093 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361716 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361757 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.420326 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:10:35.244749299 +0000 UTC Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.432700 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:32 crc kubenswrapper[4842]: E0202 06:47:32.432905 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.464918 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465330 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465491 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465647 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465807 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.568987 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569060 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569070 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672335 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672353 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774599 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878579 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878599 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878650 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981568 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981641 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981694 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981715 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084313 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084377 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084421 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084448 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187416 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187686 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187847 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.188133 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.291776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292261 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292895 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396202 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396308 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396390 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.420501 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:55:01.981234801 +0000 UTC Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.433022 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:33 crc kubenswrapper[4842]: E0202 06:47:33.433833 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.433088 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.433052 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:33 crc kubenswrapper[4842]: E0202 06:47:33.434388 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:33 crc kubenswrapper[4842]: E0202 06:47:33.434446 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.450401 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499418 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499475 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499536 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603243 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603265 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603292 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603314 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713850 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713944 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.817305 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.817691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.817854 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.818048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.818200 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.921352 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.921794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.922063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.922318 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.922548 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025722 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025785 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.128760 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.128825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.129032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.129048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.129058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232273 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232406 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232428 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336145 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336166 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336213 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.421612 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:06:00.891389633 +0000 UTC Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.433369 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:34 crc kubenswrapper[4842]: E0202 06:47:34.433578 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439744 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439800 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542892 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542917 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542934 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646855 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749191 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749275 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749395 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853687 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853815 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.956619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957545 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.062159 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165885 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165929 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269742 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.270040 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373329 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373377 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373395 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.422141 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:05:38.586386205 +0000 UTC Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.433049 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.433093 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:35 crc kubenswrapper[4842]: E0202 06:47:35.433415 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.433452 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:35 crc kubenswrapper[4842]: E0202 06:47:35.434171 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:35 crc kubenswrapper[4842]: E0202 06:47:35.434460 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.453756 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.474750 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477712 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477732 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477760 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477811 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.490932 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.509242 4842 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.529183 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.556619 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.580196 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582705 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582768 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582813 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.601550 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.621842 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.639092 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.654729 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.678097 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.686514 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.686593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc 
kubenswrapper[4842]: I0202 06:47:35.686606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.686628 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.687255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.696908 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:4
6:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.711918 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.726825 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.744712 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.763500 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.787408 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790581 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790651 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790681 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893909 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893961 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893975 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893985 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.997509 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.997947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.998149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.998360 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.998507 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.102417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.102929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.103024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.103123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.103271 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206554 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206600 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206627 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206639 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310150 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310699 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413199 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.423261 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:38:02.626231697 +0000 UTC Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.432952 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:36 crc kubenswrapper[4842]: E0202 06:47:36.433655 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.434056 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517715 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517826 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517860 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517896 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.620822 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.620858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.621066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.621091 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.621101 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724011 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724085 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724131 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826812 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826848 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826863 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930349 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930363 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930383 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930397 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.021783 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.031014 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.031600 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036299 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036395 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036515 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.051911 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.065301 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.077720 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.099133 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.123176 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138848 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138870 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138886 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.139571 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.169911 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc
0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.189972 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.212443 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 
2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.228167 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241801 4842 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241851 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241866 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241898 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.243838 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.256926 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.271156 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.287404 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.300549 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.314512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.325197 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.336167 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343498 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343539 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343554 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343588 4842 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371843 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371888 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.385202 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.390998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391139 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.411784 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420680 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.423639 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:44:52.710145001 +0000 UTC Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.432537 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.432731 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.433067 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.433160 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.433385 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.433477 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.446268 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450555 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450571 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.472242 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475740 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475771 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475786 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.494476 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.494600 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496263 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496293 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496304 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599499 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599515 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702152 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702174 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702184 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805178 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908728 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908829 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908881 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908899 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012269 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012290 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012321 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012343 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.037182 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.038424 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.043542 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" exitCode=1 Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.043600 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.043648 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.049258 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:47:38 crc kubenswrapper[4842]: E0202 06:47:38.049555 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.071823 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.093327 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.112114 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115586 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115736 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.148282 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.171883 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.190938 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.207552 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218390 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218435 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.228878 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.252120 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.270428 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.288339 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.304523 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322541 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322600 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322660 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.324979 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.342121 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.358544 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.373089 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.386379 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.406834 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.423977 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:06:19.245982854 +0000 UTC Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426323 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426390 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426415 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426433 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.432539 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:38 crc kubenswrapper[4842]: E0202 06:47:38.432756 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528751 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528778 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528798 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.631923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632033 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632065 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632089 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734672 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734724 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838313 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838406 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838455 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941697 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941831 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044562 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044622 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.050749 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.056354 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.056878 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.076664 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.094079 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.109720 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.128825 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149361 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149405 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149423 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.154015 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.176020 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.195307 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.225436 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc
0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.249424 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253272 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.272907 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277264 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277557 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277637 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.277830 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.277922 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.277890599 +0000 UTC m=+148.655158561 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278047 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.278026192 +0000 UTC m=+148.655294144 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278178 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278209 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278278 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278344 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.278323029 +0000 UTC m=+148.655590981 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278426 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278505 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.278486593 +0000 UTC m=+148.655754555 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.291330 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is 
after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.313115 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.332512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.352661 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357804 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357882 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357900 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357928 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357946 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.374251 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.378326 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378539 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378580 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378603 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378698 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.378667781 +0000 UTC m=+148.755935733 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.393115 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.417031 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.424292 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:28:11.040305754 +0000 UTC Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.433526 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.433616 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.433773 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.433939 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.434166 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.434341 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.435342 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460963 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460989 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564148 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564255 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564275 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564318 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667473 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667492 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770360 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770896 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874592 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874647 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874676 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874688 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979114 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979196 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979208 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082127 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082553 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082967 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188645 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188664 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.292658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.292967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.292984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.293009 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.293027 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.395987 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.396057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.396081 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.396112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.396131 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.424822 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:20:24.771503043 +0000 UTC Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.433388 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:40 crc kubenswrapper[4842]: E0202 06:47:40.433612 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.498576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.498638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.498657 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.498689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.498717 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.601252 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.601322 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.601343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.601373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.601394 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.705510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.705623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.705648 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.705682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.705705 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.809179 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.809287 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.809309 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.809334 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.809352 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.912491 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.912556 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.912580 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.912606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.912625 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.021169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.021315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.021357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.021394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.021413 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.124999 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.125062 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.125073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.125095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.125113 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.230016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.230087 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.230106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.230134 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.230156 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.333399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.333464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.333481 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.333505 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.333525 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.425799 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:43:04.201167061 +0000 UTC Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.433285 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.433353 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:41 crc kubenswrapper[4842]: E0202 06:47:41.433491 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.433554 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:41 crc kubenswrapper[4842]: E0202 06:47:41.433739 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:41 crc kubenswrapper[4842]: E0202 06:47:41.433823 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.436349 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.436400 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.436412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.436434 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.436447 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.539835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.539893 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.539909 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.539935 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.539953 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.643355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.643414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.643433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.643458 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.643480 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.747194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.747306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.747347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.747382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.747404 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.850077 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.850138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.850184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.850211 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.850259 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.953660 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.953753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.953789 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.953820 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.953845 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:41Z","lastTransitionTime":"2026-02-02T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.057671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.057724 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.057740 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.057765 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.057780 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.161021 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.161246 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.161274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.161306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.161328 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.265619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.265788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.265814 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.265846 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.265885 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.368823 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.368877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.368894 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.368920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.368941 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.426934 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:50:27.123580124 +0000 UTC Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.433359 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:42 crc kubenswrapper[4842]: E0202 06:47:42.433564 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.472934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.473016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.473042 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.473369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.473442 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.576650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.576735 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.576753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.576779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.576797 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.680730 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.680781 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.680800 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.680826 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.680843 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.783675 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.783718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.783734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.783772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.783789 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.886941 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.887015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.887043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.887076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.887097 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.990236 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.990296 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.990307 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.990326 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.990339 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:42Z","lastTransitionTime":"2026-02-02T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.093986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.094039 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.094056 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.094080 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.094096 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.197304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.197357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.197369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.197387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.197398 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.300726 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.300782 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.300799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.300823 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.300840 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.404702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.404756 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.404766 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.404787 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.404802 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.427744 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:05:17.273484153 +0000 UTC Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.433210 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.433275 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.433621 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:43 crc kubenswrapper[4842]: E0202 06:47:43.433822 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:43 crc kubenswrapper[4842]: E0202 06:47:43.433999 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:43 crc kubenswrapper[4842]: E0202 06:47:43.434359 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.508169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.508289 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.508310 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.508338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.508355 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.610836 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.610897 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.610913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.610938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.610957 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.713705 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.713777 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.713802 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.713829 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.713846 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.816185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.816286 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.816313 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.816343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.816364 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.919730 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.919797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.919819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.919850 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.919871 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:43Z","lastTransitionTime":"2026-02-02T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.022171 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.022265 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.022291 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.022320 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.022343 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.125596 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.125658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.125676 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.125700 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.125718 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228348 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228402 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228422 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331744 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331791 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331809 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.428145 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:58:01.504770272 +0000 UTC Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.432660 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:44 crc kubenswrapper[4842]: E0202 06:47:44.432957 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434907 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.538366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.538870 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.539068 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.539294 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.539495 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.643053 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.745654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746258 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746380 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746572 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848657 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848668 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848684 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848695 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958002 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958714 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958728 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.061771 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062349 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062752 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166977 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.270486 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.270941 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.271154 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.271375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.271587 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.375521 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.375931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.376257 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.376445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.376632 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.429184 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:42:21.544717261 +0000 UTC Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.432516 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.432775 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.432561 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:45 crc kubenswrapper[4842]: E0202 06:47:45.432997 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:45 crc kubenswrapper[4842]: E0202 06:47:45.433068 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:45 crc kubenswrapper[4842]: E0202 06:47:45.433266 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.445749 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.465560 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to
/host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481765 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481791 4842 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481824 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481850 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.483195 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.501901 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.519162 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.534021 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.553049 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.570149 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583915 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.584000 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.588403 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.603435 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.634530 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc
0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.653179 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.670511 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.686912 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.686974 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.686991 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.687015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.687032 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.687435 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.704913 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.726158 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.745619 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.765569 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 
2025-08-24T17:21:41Z"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789859 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789945 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789958 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893714 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893732 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996972 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.997013 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100664 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100836 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204729 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204863 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308628 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308687 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308725 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308742 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412675 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412721 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.430009 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:26:00.759434323 +0000 UTC
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.433462 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:46 crc kubenswrapper[4842]: E0202 06:47:46.433632 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515455 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515477 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619097 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619155 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722649 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825871 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825932 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929407 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929434 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032826 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032909 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136111 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136171 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136237 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239559 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239571 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342761 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342836 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342847 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.430656 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:01:00.808040108 +0000 UTC Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.433185 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.433264 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.433397 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.433506 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.433813 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.433691 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.445998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446080 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446099 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446125 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446142 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549196 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549298 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549316 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549362 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652577 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652639 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756577 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756677 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756729 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816998 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.838993 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843867 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843886 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.861262 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.870793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.871440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.871642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.871918 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.872110 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.891042 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.896970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897074 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897097 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.915066 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.920933 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921006 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921031 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921083 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.937204 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.938316 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941195 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044318 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044340 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044355 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147293 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147359 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250694 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250721 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250739 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353171 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353268 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353302 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.431752 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:08:43.790204903 +0000 UTC Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.433105 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:48 crc kubenswrapper[4842]: E0202 06:47:48.433345 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456561 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456646 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456666 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559829 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559843 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559853 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662246 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662308 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662326 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662351 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662370 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765400 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765538 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.868905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.868979 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.869002 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.869032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.869054 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972540 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972615 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972653 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972685 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972708 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.075939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.075997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.076016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.076040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.076058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179157 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179169 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281757 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281834 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384685 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384757 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.432718 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:11:45.283241535 +0000 UTC Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.432937 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:49 crc kubenswrapper[4842]: E0202 06:47:49.433044 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.433183 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:49 crc kubenswrapper[4842]: E0202 06:47:49.433336 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.433576 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:49 crc kubenswrapper[4842]: E0202 06:47:49.433838 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.488862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.489207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.489515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.490204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.490472 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593281 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593371 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.696963 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697030 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697095 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.799825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800200 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800397 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904080 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904097 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904122 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904141 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006778 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006800 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.109988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110085 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110685 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110750 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213735 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213859 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213926 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322288 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322318 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322339 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.425820 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426091 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426476 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426687 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.433296 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:28:54.667052284 +0000 UTC Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.433457 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:50 crc kubenswrapper[4842]: E0202 06:47:50.433636 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529376 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529426 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.631864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632164 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632643 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632817 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.735922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.735986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.736010 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.736038 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.736062 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.839308 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.839359 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.839375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.839397 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.839413 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.941712 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.941775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.941792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.941818 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.941837 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.044661 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.044733 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.044755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.044784 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.044804 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.147563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.147621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.147640 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.147666 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.147685 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.251050 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.251143 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.251167 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.251196 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.251243 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.353959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.354031 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.354061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.354135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.354160 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.432682 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.432721 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:51 crc kubenswrapper[4842]: E0202 06:47:51.433008 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:51 crc kubenswrapper[4842]: E0202 06:47:51.433079 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.433317 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:51 crc kubenswrapper[4842]: E0202 06:47:51.433570 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.433612 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:34:27.928649099 +0000 UTC Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.457673 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.457986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.458130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.458349 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.458517 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.561799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.562135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.562372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.562597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.562800 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.666508 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.666564 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.666582 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.666604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.666621 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.770203 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.770283 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.770301 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.770327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.770350 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.872861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.872933 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.872950 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.872974 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.873000 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.980408 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.980452 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.980465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.980482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.980493 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:51Z","lastTransitionTime":"2026-02-02T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.083770 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.083842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.083861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.083907 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.083922 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.186813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.186876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.186895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.186923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.186941 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.290483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.290565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.290583 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.290610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.290628 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.394534 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.394617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.394645 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.394679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.394706 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.432912 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:52 crc kubenswrapper[4842]: E0202 06:47:52.433380 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.433907 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:27:40.014812412 +0000 UTC Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.498300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.498379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.498398 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.498424 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.498442 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.601613 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.601666 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.601683 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.601707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.601724 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.705354 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.705413 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.705429 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.705452 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.705469 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.808267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.808322 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.808340 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.808367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.808386 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.911755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.911816 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.911834 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.911858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.911876 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:52Z","lastTransitionTime":"2026-02-02T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.015732 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.015808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.015832 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.015862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.015881 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.119166 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.119284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.119309 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.119340 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.119363 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.222904 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.222963 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.222980 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.223003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.223024 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.326089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.326181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.326199 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.326277 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.326297 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.429291 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.429362 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.429387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.429414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.429433 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.432504 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.432606 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.432669 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.432706 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.433009 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.433111 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.434162 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.434437 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.434512 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:40:37.36169692 +0000 UTC Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.533447 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.533512 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.533536 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.533562 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.533582 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.637890 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.638500 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.638549 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.638584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.638608 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.741332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.741396 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.741408 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.741425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.741751 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.844969 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.845054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.845073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.845100 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.845119 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.948553 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.948613 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.948633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.948659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.948677 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:53Z","lastTransitionTime":"2026-02-02T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.051673 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.051743 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.051761 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.051787 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.051804 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.155515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.155591 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.155615 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.155644 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.155666 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.259794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.259929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.259955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.259993 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.260018 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.363272 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.363319 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.363329 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.363347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.363360 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.433479 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:54 crc kubenswrapper[4842]: E0202 06:47:54.433654 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.434971 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:06:13.98594947 +0000 UTC
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.466532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.466598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.466612 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.466631 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.466648 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.569592 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.569664 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.569682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.569709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.569728 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.671920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.672004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.672041 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.672072 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.672095 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.775405 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.775504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.775531 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.775566 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.775590 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.879192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.879315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.879335 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.879369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.879396 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.983202 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.983281 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.983298 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.983322 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.983340 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:54Z","lastTransitionTime":"2026-02-02T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.086804 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.086877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.086896 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.086923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.086941 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.190097 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.190149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.190160 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.190177 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.190191 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.256407 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.256665 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.256777 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:48:59.256748463 +0000 UTC m=+164.634016385 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.294394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.294456 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.294478 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.294503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.294522 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.398901 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.398966 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.398984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.399017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.399035 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.433471 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.433662 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.433905 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.434042 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.434191 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.435060 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.435161 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:33:09.009243815 +0000 UTC Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.452508 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.470727 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.491577 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501907 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501961 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501979 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.512609 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.532479 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.565483 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.590299 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605970 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.610949 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.631688 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.648369 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.667397 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.686095 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.702851 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713029 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713381 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.729321 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.750264 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.775404 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.790689 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.806484 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816665 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816739 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816802 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816826 4842 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920170 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920186 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023324 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023388 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023407 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023451 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126573 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126635 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230333 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230405 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.332941 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.332997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.333014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.333041 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.333059 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.433119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:56 crc kubenswrapper[4842]: E0202 06:47:56.433365 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435551 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:42:56.360446117 +0000 UTC Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435706 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435790 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435846 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435873 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539176 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539194 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.641968 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642058 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642086 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642109 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745233 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745544 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745771 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849556 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849684 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849748 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952690 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952716 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952735 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.055921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.055979 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.055996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.056022 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.056062 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159205 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262766 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262822 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365696 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365720 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365736 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.433553 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.434620 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.434787 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:57 crc kubenswrapper[4842]: E0202 06:47:57.434900 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:57 crc kubenswrapper[4842]: E0202 06:47:57.434774 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:57 crc kubenswrapper[4842]: E0202 06:47:57.435013 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.436053 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:42:07.177750561 +0000 UTC Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.457127 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468860 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468884 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468905 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572269 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572288 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675710 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675781 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675795 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675833 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.778807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.778897 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.778922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.779020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.779051 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.882970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883050 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883114 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986430 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986501 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986556 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986583 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090517 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090716 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.193889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.193995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.194016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.194046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.194065 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298029 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298121 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298150 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298170 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329686 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329836 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.351143 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357147 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357174 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.379944 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387276 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387401 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387420 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.410451 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417631 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417721 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417741 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.433460 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.433625 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.436647 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:18:01.385320899 +0000 UTC Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.443950 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450133 4842 kubelet_node_status.go:724] "Recording event message for node" 
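Note: both failed patch attempts above share one root cause, stated at the tail of each error: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A minimal sketch for confirming this from the node itself, assuming Python 3 with the third-party cryptography package (version 42+ for the *_utc accessors) and that the webhook port is reachable locally:

    # Inspect the validity window of the certificate served on 127.0.0.1:9743
    # (the endpoint from the failed webhook Post in the log above).
    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # skip verification so an expired cert can still be fetched

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes of the leaf cert

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)   # expect 2025-08-24T17:21:41Z per the log
    print("expired:  ", now > cert.not_valid_after_utc)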
node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450200 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.472544 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.472787 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
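Note: the pair of errors above shows the retry budget being exhausted: each "Error updating node status, will retry" is one attempt, and "update node status exceeds retry count" is logged once the fixed budget is spent, after which the kubelet waits for the next status-update interval. In the upstream kubelet this budget is the nodeStatusUpdateRetry constant (5 attempts); a schematic of the loop, with the constant treated as an assumption rather than read from this build:

    # Schematic of the kubelet's node-status retry loop (not the real implementation).
    NODE_STATUS_UPDATE_RETRY = 5  # nodeStatusUpdateRetry in the upstream kubelet source

    def update_node_status(try_patch) -> str | None:
        """Run try_patch() until it succeeds or the retry budget is spent."""
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            err = try_patch()
            if err is None:
                return None  # status patched successfully
            print(f'"Error updating node status, will retry" err={err!r}')
        return "update node status exceeds retry count"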
event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475740 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475761 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.578876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.578969 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.578988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.579023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.579042 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682364 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682432 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682446 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682472 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786546 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786595 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786635 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890556 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.993893 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.993960 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.993977 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.994004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.994024 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097186 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097213 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200737 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200761 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304498 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
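Note: from here to the end of the excerpt the journal is dominated by that five-entry block repeating roughly every 100 ms while the node stays NotReady. A quick way to verify the cadence and counts from a saved copy of the journal, using only the standard library ("kubelet.log" is a placeholder filename):

    # Count "Recording event message" events per second in a saved journal excerpt.
    import re
    from collections import Counter

    EVENT_RE = re.compile(
        r'(\d{2}:\d{2}:\d{2})\.\d+ \d+ kubelet_node_status\.go.*? event="([A-Za-z]+)"'
    )

    counts: Counter[tuple[str, str]] = Counter()
    with open("kubelet.log") as f:  # placeholder path
        for line in f:
            # findall handles several journal entries fused onto one physical line
            for second, event in EVENT_RE.findall(line):
                counts[(second, event)] += 1

    for (second, event), n in sorted(counts.items()):
        print(f"{second} {event}: {n}")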
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.432705 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:59 crc kubenswrapper[4842]: E0202 06:47:59.432860 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.433077 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:59 crc kubenswrapper[4842]: E0202 06:47:59.433123 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.433264 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:59 crc kubenswrapper[4842]: E0202 06:47:59.433315 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.437127 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:20:58.457188597 +0000 UTC
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519039 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519104 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519117 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519158 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the five-entry block repeats at 06:47:59.622379, .726343, .830711, .934693 and 06:48:00.037842, .141504, .245002, .347878 ...]
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.433169 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:00 crc kubenswrapper[4842]: E0202 06:48:00.433408 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.438286 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:22:41.196169222 +0000 UTC
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450574 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450613 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450626 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
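Note: the certificate_manager lines deserve a close read. The kubelet-serving certificate itself is still valid (expiration 2026-02-24), but every pass logs a different, already-past rotation deadline (2026-01-08, 2025-12-24, 2026-01-16 in this excerpt). That is expected: the client-go certificate manager draws a fresh jittered deadline in the late part of the certificate's validity window on each check, and a deadline in the past simply means rotation is due now. A sketch of the deadline draw, assuming the upstream 70-90% jitter range and an illustrative notBefore, since only the expiration appears in the log:

    # Schematic: jittered rotation deadline, re-drawn on each pass.
    import random
    from datetime import datetime, timedelta, timezone

    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
    not_before = not_after - timedelta(days=365)                      # illustrative assumption

    total = (not_after - not_before).total_seconds()
    for _ in range(3):
        # a fresh random point 70-90% of the way through the validity window
        deadline = not_before + timedelta(seconds=total * random.uniform(0.7, 0.9))
        print("rotation deadline:", deadline.isoformat())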
Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.553914 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.553973 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.553996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.554027 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.554049 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656901 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656954 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656964 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656997 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760128 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760171 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863883 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863942 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863974 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863988 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967774 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967840 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071712 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071977 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174574 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174701 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277696 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277777 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277790 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381486 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381506 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.433050 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.433074 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:48:01 crc kubenswrapper[4842]: E0202 06:48:01.433379 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.433424 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:48:01 crc kubenswrapper[4842]: E0202 06:48:01.433558 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:48:01 crc kubenswrapper[4842]: E0202 06:48:01.433771 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.438389 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:23:17.050143336 +0000 UTC
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483882 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483974 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.484014 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.587601 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.587663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.587680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.587708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.587729 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.690964 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.691014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.691032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.691054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.691070 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.795008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.795087 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.795108 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.795137 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.795161 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.898771 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.898834 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.898857 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.898886 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.899030 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.002832 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.002916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.002942 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.002978 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.002999 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.107046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.107519 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.107659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.107814 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.107953 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.211465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.211542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.211568 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.211603 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.211630 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.314321 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.314385 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.314412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.314439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.314460 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.417731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.417792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.417809 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.417833 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.417850 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.432741 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:02 crc kubenswrapper[4842]: E0202 06:48:02.433006 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.438930 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:09:26.213153981 +0000 UTC
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.521466 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.521544 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.521565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.521594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.521614 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.625161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.625302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.625325 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.625353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.625373 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.728147 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.728271 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.728292 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.728348 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.728368 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.831563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.831720 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.831743 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.831770 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.831818 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.935913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.936008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.936058 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.936093 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.936162 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:02Z","lastTransitionTime":"2026-02-02T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.040602 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.040696 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.040719 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.040752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.040776 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.143983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.144047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.144065 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.144089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.144109 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.246962 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.247032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.247057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.247090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.247113 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.350269 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.350339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.350364 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.350392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.350414 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.433651 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.433694 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.433709 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:48:03 crc kubenswrapper[4842]: E0202 06:48:03.433866 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:48:03 crc kubenswrapper[4842]: E0202 06:48:03.434006 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:48:03 crc kubenswrapper[4842]: E0202 06:48:03.434402 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.439037 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:36:40.648200001 +0000 UTC Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.452970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.453023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.453040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.453064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.453085 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.556736 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.556810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.556828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.556856 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.556875 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.660084 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.660137 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.660154 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.660194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.660213 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.764007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.764078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.764096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.764122 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.764142 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.866951 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.867437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.867580 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.867722 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.867880 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.970548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.970844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.970919 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.970983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.971038 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:03Z","lastTransitionTime":"2026-02-02T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.074564 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.074627 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.074644 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.074671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.074688 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.177617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.178057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.178283 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.178531 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.178745 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.282537 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.282591 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.282604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.282620 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.282651 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.386058 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.386919 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.387072 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.387241 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.387384 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.432835 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:04 crc kubenswrapper[4842]: E0202 06:48:04.433360 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.440061 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:01:35.196423378 +0000 UTC Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.491113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.491173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.491193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.491247 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.491266 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.594924 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.594993 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.595016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.595046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.595072 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.698326 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.698450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.698469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.698494 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.698514 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.801844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.801904 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.801921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.801944 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.801962 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.905647 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.905699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.905717 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.905743 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.905763 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:04Z","lastTransitionTime":"2026-02-02T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.008288 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.008360 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.008385 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.008408 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.008426 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.111497 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.111564 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.111582 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.111607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.111627 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.214603 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.214663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.214681 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.214704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.214725 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.317331 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.317390 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.317408 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.317431 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.317449 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.420937 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.421056 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.421076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.421101 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.421118 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.432910 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:05 crc kubenswrapper[4842]: E0202 06:48:05.433056 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.433321 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:05 crc kubenswrapper[4842]: E0202 06:48:05.433422 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.433484 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:05 crc kubenswrapper[4842]: E0202 06:48:05.433683 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.446453 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:17:56.485527186 +0000 UTC Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.520486 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.520457035 podStartE2EDuration="59.520457035s" podCreationTimestamp="2026-02-02 06:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.496277841 +0000 UTC m=+110.873545763" watchObservedRunningTime="2026-02-02 06:48:05.520457035 +0000 UTC m=+110.897724987" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.523312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.523371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.523388 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.523413 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.523432 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.556244 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podStartSLOduration=90.556191237 podStartE2EDuration="1m30.556191237s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.556072275 +0000 UTC m=+110.933340197" watchObservedRunningTime="2026-02-02 06:48:05.556191237 +0000 UTC m=+110.933459169" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.590767 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gmkx9" podStartSLOduration=90.590744591 podStartE2EDuration="1m30.590744591s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.575093533 +0000 UTC m=+110.952361456" watchObservedRunningTime="2026-02-02 06:48:05.590744591 +0000 UTC m=+110.968012503" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.606571 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.606550353 podStartE2EDuration="32.606550353s" podCreationTimestamp="2026-02-02 06:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.591078839 +0000 UTC m=+110.968346751" watchObservedRunningTime="2026-02-02 06:48:05.606550353 +0000 UTC m=+110.983818265" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.618667 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-q2xjl" podStartSLOduration=90.618638254 podStartE2EDuration="1m30.618638254s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.617519697 +0000 UTC m=+110.994787639" watchObservedRunningTime="2026-02-02 06:48:05.618638254 +0000 UTC m=+110.995906206" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.626270 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.626311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.626323 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.626342 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.626355 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.630987 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ms7n2" podStartSLOduration=89.630965952 podStartE2EDuration="1m29.630965952s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.63047498 +0000 UTC m=+111.007742912" watchObservedRunningTime="2026-02-02 06:48:05.630965952 +0000 UTC m=+111.008233864" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.646273 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.646250151 podStartE2EDuration="1m24.646250151s" podCreationTimestamp="2026-02-02 06:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.645859831 +0000 UTC m=+111.023127783" watchObservedRunningTime="2026-02-02 06:48:05.646250151 +0000 UTC m=+111.023518063" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.728162 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.728205 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.728233 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.728250 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.728261 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.764571 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" podStartSLOduration=90.764549116 podStartE2EDuration="1m30.764549116s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.76390905 +0000 UTC m=+111.141176962" watchObservedRunningTime="2026-02-02 06:48:05.764549116 +0000 UTC m=+111.141817038" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.784506 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" podStartSLOduration=89.784485867 podStartE2EDuration="1m29.784485867s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.783828821 +0000 UTC m=+111.161096743" watchObservedRunningTime="2026-02-02 06:48:05.784485867 +0000 UTC m=+111.161753789" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.830658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.830702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.830761 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.830779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.830792 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.842593 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.842576559 podStartE2EDuration="8.842576559s" podCreationTimestamp="2026-02-02 06:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.842064776 +0000 UTC m=+111.219332698" watchObservedRunningTime="2026-02-02 06:48:05.842576559 +0000 UTC m=+111.219844461" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.933482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.933551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.933571 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.933596 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.933615 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:05Z","lastTransitionTime":"2026-02-02T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036665 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036764 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036826 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.139973 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140041 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140059 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140085 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140106 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243423 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243488 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346143 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346162 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346191 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346209 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.432732 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:06 crc kubenswrapper[4842]: E0202 06:48:06.432958 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.446815 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:56:33.488422988 +0000 UTC Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449336 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449388 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449407 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552768 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552785 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552809 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552829 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656139 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656289 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656334 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656361 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759948 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759999 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.760018 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863744 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863879 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966743 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966916 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070628 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070645 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070668 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070687 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173156 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173258 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173303 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173320 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276732 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276888 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381531 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.433603 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.433749 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:07 crc kubenswrapper[4842]: E0202 06:48:07.433816 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.433909 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:07 crc kubenswrapper[4842]: E0202 06:48:07.434126 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:07 crc kubenswrapper[4842]: E0202 06:48:07.434286 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.447646 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:46:14.331735856 +0000 UTC Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484885 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484903 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588897 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692770 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692837 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796301 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899434 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899629 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002871 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002920 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106132 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209418 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209469 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312982 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.313000 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.416942 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417075 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.432790 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:08 crc kubenswrapper[4842]: E0202 06:48:08.433157 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.434207 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:48:08 crc kubenswrapper[4842]: E0202 06:48:08.434496 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.448163 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:38:19.399231974 +0000 UTC Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.520952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521036 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521050 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624459 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624625 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727171 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727199 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773883 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773932 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.848022 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.847993827 podStartE2EDuration="1m33.847993827s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.865576084 +0000 UTC m=+111.242843996" watchObservedRunningTime="2026-02-02 06:48:08.847993827 +0000 UTC m=+114.225261769" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.848736 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv"] Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.849303 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.852019 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.852109 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.852741 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.853055 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033624 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033733 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033765 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135549 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135772 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.136371 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.138101 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc 
kubenswrapper[4842]: I0202 06:48:09.139448 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.145567 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.170417 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.432988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.432999 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:09 crc kubenswrapper[4842]: E0202 06:48:09.433185 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:09 crc kubenswrapper[4842]: E0202 06:48:09.433675 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.433155 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:09 crc kubenswrapper[4842]: E0202 06:48:09.433979 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.448452 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:28:45.151910825 +0000 UTC Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.448594 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.460806 4842 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.464956 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: W0202 06:48:09.489328 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf0a1a0b_5f2b_47fe_ae63_8cc10e9ad69f.slice/crio-8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9 WatchSource:0}: Error finding container 8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9: Status 404 returned error can't find the container with id 8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9 Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.174118 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" event={"ID":"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f","Type":"ContainerStarted","Data":"aeed2cceffd144a699dd9d3912a8f1679c00e3ae944da369141c619a8adfe5f3"} Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.174271 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" event={"ID":"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f","Type":"ContainerStarted","Data":"8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9"} Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.199842 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" podStartSLOduration=94.19980909 podStartE2EDuration="1m34.19980909s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:10.197656638 +0000 UTC m=+115.574924620" watchObservedRunningTime="2026-02-02 06:48:10.19980909 +0000 UTC m=+115.577077032" Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.433336 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:10 crc kubenswrapper[4842]: E0202 06:48:10.433580 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.180629 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181614 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/0.log" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181673 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" exitCode=1 Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerDied","Data":"eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d"} Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181775 4842 scope.go:117] "RemoveContainer" containerID="8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.182418 4842 scope.go:117] "RemoveContainer" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.182685 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gmkx9_openshift-multus(c1fd21cd-ea6a-44a0-b136-f338fc97cf18)\"" pod="openshift-multus/multus-gmkx9" podUID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.433367 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.433527 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.433634 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.433909 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.434148 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.434505 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:12 crc kubenswrapper[4842]: I0202 06:48:12.188023 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:48:12 crc kubenswrapper[4842]: I0202 06:48:12.433347 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:12 crc kubenswrapper[4842]: E0202 06:48:12.433589 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:13 crc kubenswrapper[4842]: I0202 06:48:13.432550 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:13 crc kubenswrapper[4842]: I0202 06:48:13.432684 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:13 crc kubenswrapper[4842]: I0202 06:48:13.432747 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:13 crc kubenswrapper[4842]: E0202 06:48:13.435854 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:13 crc kubenswrapper[4842]: E0202 06:48:13.436089 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:13 crc kubenswrapper[4842]: E0202 06:48:13.436627 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:14 crc kubenswrapper[4842]: I0202 06:48:14.433337 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:14 crc kubenswrapper[4842]: E0202 06:48:14.433570 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:15 crc kubenswrapper[4842]: I0202 06:48:15.433083 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:15 crc kubenswrapper[4842]: I0202 06:48:15.433183 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:15 crc kubenswrapper[4842]: I0202 06:48:15.433264 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.435657 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.436381 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.437213 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.457965 4842 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.572662 4842 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:48:16 crc kubenswrapper[4842]: I0202 06:48:16.432822 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:16 crc kubenswrapper[4842]: E0202 06:48:16.433068 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:17 crc kubenswrapper[4842]: I0202 06:48:17.433287 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:17 crc kubenswrapper[4842]: I0202 06:48:17.433316 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:17 crc kubenswrapper[4842]: E0202 06:48:17.433529 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:17 crc kubenswrapper[4842]: I0202 06:48:17.433316 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:17 crc kubenswrapper[4842]: E0202 06:48:17.433683 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:17 crc kubenswrapper[4842]: E0202 06:48:17.433823 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:18 crc kubenswrapper[4842]: I0202 06:48:18.433103 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:18 crc kubenswrapper[4842]: E0202 06:48:18.433417 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.433104 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.433139 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:19 crc kubenswrapper[4842]: E0202 06:48:19.433299 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.433364 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:19 crc kubenswrapper[4842]: E0202 06:48:19.434107 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:19 crc kubenswrapper[4842]: E0202 06:48:19.434249 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.434913 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.221179 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.223974 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.224747 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.260579 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podStartSLOduration=105.260560575 podStartE2EDuration="1m45.260560575s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:20.259675903 +0000 UTC m=+125.636943835" watchObservedRunningTime="2026-02-02 06:48:20.260560575 +0000 UTC m=+125.637828487" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.433039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:20 crc kubenswrapper[4842]: E0202 06:48:20.433179 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.471477 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9chjr"] Feb 02 06:48:20 crc kubenswrapper[4842]: E0202 06:48:20.573839 4842 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.228987 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.229709 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.433328 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.433346 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.433536 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.433346 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.433754 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.433899 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.432927 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433140 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.433194 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.433359 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.433272 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433483 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433660 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433853 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.432922 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.433052 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.433090 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.434977 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.435125 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.435264 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.435129 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.435529 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.574825 4842 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:48:26 crc kubenswrapper[4842]: I0202 06:48:26.433167 4842 scope.go:117] "RemoveContainer" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.254511 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.254861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631"} Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.435704 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.435851 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.436060 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.436114 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.436322 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.436381 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.436514 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.436597 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432692 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432730 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432687 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432840 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.432901 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.433045 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.433191 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.433280 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.432971 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.433058 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.434118 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.434308 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.435663 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.436107 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.437021 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.438160 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.438586 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.438963 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.596738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.662487 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.663362 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.664683 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4rp8p"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.665695 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.666942 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.667710 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.668319 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.669075 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.670881 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5dc9g"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.672132 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677088 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677121 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677256 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677316 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677394 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677406 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677476 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677403 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.678414 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.679357 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.678817 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.679954 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.680736 
4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdspj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.681399 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.681420 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.682594 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.683151 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.683686 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.685826 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.689779 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.690106 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.695280 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.701372 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.705080 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.705417 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.705783 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706206 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706366 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706544 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706592 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706563 4842 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.709665 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.710265 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.712018 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.712313 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.713328 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.713575 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.713799 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.714250 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.714514 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.714879 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.715137 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.715377 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.716367 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.717156 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.722138 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.722598 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.722760 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.723153 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 
06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.723404 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.723556 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.729941 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.730500 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.732239 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.732684 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.732827 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.733292 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.733906 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.758321 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.761316 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.761866 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.762107 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.762373 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.762852 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lh2qm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.763128 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.763263 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.763560 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.769004 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.769452 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.769825 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.770306 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.770520 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.770930 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.786681 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.786941 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.787121 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.787370 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.787761 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788263 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788322 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788622 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788655 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.789246 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.790120 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.795259 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.796179 
4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.796700 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.796967 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797019 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797235 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797308 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797398 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798468 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798579 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798720 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798982 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.799941 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800036 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800114 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800174 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800334 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800537 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800646 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800886 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808653 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-serving-cert\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808702 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-client\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808725 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808750 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-etcd-client\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808769 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-encryption-config\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808788 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808840 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76r2\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-kube-api-access-k76r2\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-config\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808890 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808926 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6v4\" (UniqueName: \"kubernetes.io/projected/ceaf90b2-229c-4452-8a1b-fd016682bf6e-kube-api-access-7s6v4\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808950 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808989 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809013 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809038 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpl2m\" (UniqueName: \"kubernetes.io/projected/e4367135-ecb4-447d-a89e-5dcbeffe345e-kube-api-access-mpl2m\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809056 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809077 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-trusted-ca\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809097 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809120 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809183 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809210 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809250 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shp4g\" (UniqueName: \"kubernetes.io/projected/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-kube-api-access-shp4g\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809269 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-node-pullsecrets\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809290 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809308 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9f6z\" (UniqueName: \"kubernetes.io/projected/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-kube-api-access-c9f6z\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809330 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmggs\" (UniqueName: \"kubernetes.io/projected/45dcaecb-f74e-4eaf-886a-28b6632f8d44-kube-api-access-xmggs\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809349 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-encryption-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809371 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809387 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809409 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809429 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809449 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809467 4842 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-images\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit-dir\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809511 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809530 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809554 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97ps\" (UniqueName: \"kubernetes.io/projected/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-kube-api-access-z97ps\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809572 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgtct\" (UniqueName: \"kubernetes.io/projected/10f8b640-1372-484f-b42f-97e336fb2992-kube-api-access-sgtct\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809597 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-config\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809621 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-image-import-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809642 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809659 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74549f13-263e-4e4f-8331-9f7fd6bf36b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809734 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45dcaecb-f74e-4eaf-886a-28b6632f8d44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809753 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809773 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809795 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-auth-proxy-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: 
I0202 06:48:39.809834 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10f8b640-1372-484f-b42f-97e336fb2992-audit-dir\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809898 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809929 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-serving-cert\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809949 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809974 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4367135-ecb4-447d-a89e-5dcbeffe345e-machine-approver-tls\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809993 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74549f13-263e-4e4f-8331-9f7fd6bf36b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810014 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznxk\" (UniqueName: \"kubernetes.io/projected/e08cb720-1a1d-47c3-a787-c61d377bf2dd-kube-api-access-sznxk\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810031 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810078 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-serving-cert\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810127 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810149 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810263 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.816129 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-pbtq6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.817856 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.824486 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827461 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-config\") pod 
\"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827535 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceaf90b2-229c-4452-8a1b-fd016682bf6e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827605 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaf90b2-229c-4452-8a1b-fd016682bf6e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-audit-policies\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827695 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827773 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08cb720-1a1d-47c3-a787-c61d377bf2dd-serving-cert\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827810 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827935 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.828936 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829481 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5wqx2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829678 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.830088 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.828936 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.830201 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829375 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829443 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829503 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829555 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.831389 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.832880 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.833620 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834292 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834494 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834675 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834764 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834863 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834943 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.835073 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.835228 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.835314 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 
06:48:39.835431 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836001 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836120 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836483 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836917 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.837500 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-j7bfz"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.837963 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.841270 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.842147 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.842519 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.842525 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.843242 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4rp8p"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.844337 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.846413 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.847378 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.849758 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.850113 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.851127 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.852024 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv9fc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.852120 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.852465 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.854362 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.854410 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.855141 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.855328 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.856523 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.857575 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.858260 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.858332 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.859149 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.859559 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.860246 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.861323 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.862016 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.862328 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.863009 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.863869 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.864602 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.866802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.867016 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lh2qm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.868708 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.891907 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.893513 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.894761 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m2mqz"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.895994 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.897055 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdspj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.897417 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899294 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5dc9g"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899321 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899332 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899346 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899355 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5wqx2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899365 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899478 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.900023 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.900384 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.900666 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.901093 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.905415 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.909179 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.910445 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.924854 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.925316 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.927027 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929272 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929316 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929734 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97ps\" (UniqueName: \"kubernetes.io/projected/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-kube-api-access-z97ps\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929765 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/27bce4a1-799c-4d40-900c-455eaba28398-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ee0e33-a160-4303-af00-0b145647f807-config\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929802 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ee0e33-a160-4303-af00-0b145647f807-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929819 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgtct\" (UniqueName: \"kubernetes.io/projected/10f8b640-1372-484f-b42f-97e336fb2992-kube-api-access-sgtct\") 
pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929839 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-config\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74549f13-263e-4e4f-8331-9f7fd6bf36b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929925 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929943 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-image-import-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929962 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929987 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930013 
4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930036 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930070 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45dcaecb-f74e-4eaf-886a-28b6632f8d44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930090 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930113 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-metrics-tls\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930137 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnfv\" (UniqueName: \"kubernetes.io/projected/cc176201-02a2-46c0-903c-13943d989195-kube-api-access-wqnfv\") pod \"downloads-7954f5f757-pbtq6\" (UID: \"cc176201-02a2-46c0-903c-13943d989195\") " pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930167 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930185 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930201 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930255 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-auth-proxy-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930273 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-srv-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10f8b640-1372-484f-b42f-97e336fb2992-audit-dir\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930308 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930325 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930341 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-serving-cert\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930379 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: 
\"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930397 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d69d0f34-1e03-438d-9d97-de945aff185f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930415 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4367135-ecb4-447d-a89e-5dcbeffe345e-machine-approver-tls\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930430 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74549f13-263e-4e4f-8331-9f7fd6bf36b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930448 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznxk\" (UniqueName: \"kubernetes.io/projected/e08cb720-1a1d-47c3-a787-c61d377bf2dd-kube-api-access-sznxk\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930467 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-config\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930482 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930501 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930528 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-client\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc 
kubenswrapper[4842]: I0202 06:48:39.930551 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhgm\" (UniqueName: \"kubernetes.io/projected/42ff05d2-dda3-411f-bcee-816f87ce21b8-kube-api-access-6zhgm\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930574 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-serving-cert\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930590 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930606 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930621 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930636 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930652 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-serving-cert\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930667 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-config\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930683 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ceaf90b2-229c-4452-8a1b-fd016682bf6e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930709 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaf90b2-229c-4452-8a1b-fd016682bf6e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-audit-policies\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930748 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930765 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08cb720-1a1d-47c3-a787-c61d377bf2dd-serving-cert\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930780 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930798 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d0f34-1e03-438d-9d97-de945aff185f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930816 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42ff05d2-dda3-411f-bcee-816f87ce21b8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930427 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930833 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-serving-cert\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vml\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-kube-api-access-g9vml\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930897 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-client\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930915 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-etcd-client\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930948 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-encryption-config\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930967 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d69d0f34-1e03-438d-9d97-de945aff185f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930988 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931010 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod 
\"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931053 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76r2\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-kube-api-access-k76r2\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931898 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.932055 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.932547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-auth-proxy-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.932700 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933332 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933572 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933621 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933738 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-config\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.934074 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.934202 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-config\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.934780 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.937168 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-client\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.937855 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.940363 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pbtq6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.940414 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.940630 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-encryption-config\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.941657 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-serving-cert\") pod \"apiserver-76f77b778f-5dc9g\" (UID: 
\"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.941709 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942006 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-serving-cert\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942045 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942062 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942128 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-serving-cert\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942247 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-image-import-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942612 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-config\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942638 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2ee0e33-a160-4303-af00-0b145647f807-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942686 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942706 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-service-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942823 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10f8b640-1372-484f-b42f-97e336fb2992-audit-dir\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942873 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaf90b2-229c-4452-8a1b-fd016682bf6e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942886 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4367135-ecb4-447d-a89e-5dcbeffe345e-machine-approver-tls\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943164 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6v4\" (UniqueName: \"kubernetes.io/projected/ceaf90b2-229c-4452-8a1b-fd016682bf6e-kube-api-access-7s6v4\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943368 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943726 4842 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74549f13-263e-4e4f-8331-9f7fd6bf36b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943728 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944034 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944126 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944155 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944183 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpl2m\" (UniqueName: \"kubernetes.io/projected/e4367135-ecb4-447d-a89e-5dcbeffe345e-kube-api-access-mpl2m\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944210 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98wq\" (UniqueName: \"kubernetes.io/projected/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-kube-api-access-w98wq\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944256 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-trusted-ca\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944276 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw98m\" (UniqueName: \"kubernetes.io/projected/091908d5-acab-418a-a5f2-fa909294222a-kube-api-access-bw98m\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944341 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944386 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944410 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944456 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqzj\" (UniqueName: \"kubernetes.io/projected/57b85eac-df63-4c81-abe6-3dba293df9c2-kube-api-access-2kqzj\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944481 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944516 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944537 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57b85eac-df63-4c81-abe6-3dba293df9c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944559 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9f6z\" (UniqueName: \"kubernetes.io/projected/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-kube-api-access-c9f6z\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944578 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944629 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shp4g\" (UniqueName: \"kubernetes.io/projected/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-kube-api-access-shp4g\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944650 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-node-pullsecrets\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944674 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944699 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvqc\" (UniqueName: \"kubernetes.io/projected/fd96d668-a9b2-474f-8617-17eca5f01191-kube-api-access-xvvqc\") pod 
\"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944725 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-config\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944723 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944779 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-profile-collector-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944802 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b85eac-df63-4c81-abe6-3dba293df9c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944830 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmggs\" (UniqueName: \"kubernetes.io/projected/45dcaecb-f74e-4eaf-886a-28b6632f8d44-kube-api-access-xmggs\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944864 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-encryption-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944887 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944946 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944973 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnd7\" (UniqueName: \"kubernetes.io/projected/27bce4a1-799c-4d40-900c-455eaba28398-kube-api-access-2dnd7\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944997 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945015 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-images\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945032 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit-dir\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945051 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945070 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945229 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-trusted-ca\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945249 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945491 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944382 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945652 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945493 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945974 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.946023 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit-dir\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.946037 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.947408 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-node-pullsecrets\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.947651 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.947762 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv9fc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.949409 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950271 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950737 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-audit-policies\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-images\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950891 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950996 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.951021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.951105 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 
06:48:39.951602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.951896 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.953176 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.953237 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.954227 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.955321 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fhk9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.956636 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-etcd-client\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.956688 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.956795 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.958202 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kb6j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.958707 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.958776 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.960764 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.961986 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kb6j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.962073 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.963432 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fhk9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.964867 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z2sjd"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.965303 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.965933 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z2sjd"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.965953 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74549f13-263e-4e4f-8331-9f7fd6bf36b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.966050 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.966377 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967049 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967343 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45dcaecb-f74e-4eaf-886a-28b6632f8d44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967374 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967445 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967881 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08cb720-1a1d-47c3-a787-c61d377bf2dd-serving-cert\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967910 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceaf90b2-229c-4452-8a1b-fd016682bf6e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc 
kubenswrapper[4842]: I0202 06:48:39.968147 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.969972 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-encryption-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.985765 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.011296 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.025650 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045579 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045665 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d69d0f34-1e03-438d-9d97-de945aff185f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-config\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045722 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045748 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-client\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045768 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zhgm\" (UniqueName: \"kubernetes.io/projected/42ff05d2-dda3-411f-bcee-816f87ce21b8-kube-api-access-6zhgm\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045793 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045812 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-serving-cert\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045832 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d0f34-1e03-438d-9d97-de945aff185f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045849 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42ff05d2-dda3-411f-bcee-816f87ce21b8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045867 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vml\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-kube-api-access-g9vml\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045886 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d69d0f34-1e03-438d-9d97-de945aff185f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ee0e33-a160-4303-af00-0b145647f807-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: 
\"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045983 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-service-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046031 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98wq\" (UniqueName: \"kubernetes.io/projected/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-kube-api-access-w98wq\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046048 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw98m\" (UniqueName: \"kubernetes.io/projected/091908d5-acab-418a-a5f2-fa909294222a-kube-api-access-bw98m\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046080 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046110 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57b85eac-df63-4c81-abe6-3dba293df9c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046131 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kqzj\" (UniqueName: \"kubernetes.io/projected/57b85eac-df63-4c81-abe6-3dba293df9c2-kube-api-access-2kqzj\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc 
kubenswrapper[4842]: I0202 06:48:40.046148 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046194 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvqc\" (UniqueName: \"kubernetes.io/projected/fd96d668-a9b2-474f-8617-17eca5f01191-kube-api-access-xvvqc\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046211 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046248 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-profile-collector-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046265 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b85eac-df63-4c81-abe6-3dba293df9c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dnd7\" (UniqueName: \"kubernetes.io/projected/27bce4a1-799c-4d40-900c-455eaba28398-kube-api-access-2dnd7\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046310 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ee0e33-a160-4303-af00-0b145647f807-config\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046327 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ee0e33-a160-4303-af00-0b145647f807-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046351 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/27bce4a1-799c-4d40-900c-455eaba28398-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046388 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046406 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046428 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046458 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-metrics-tls\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046476 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnfv\" (UniqueName: \"kubernetes.io/projected/cc176201-02a2-46c0-903c-13943d989195-kube-api-access-wqnfv\") pod \"downloads-7954f5f757-pbtq6\" (UID: \"cc176201-02a2-46c0-903c-13943d989195\") " pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046495 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046512 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-srv-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046527 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046698 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-config\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046952 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-service-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.047393 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42ff05d2-dda3-411f-bcee-816f87ce21b8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.047788 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048081 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57b85eac-df63-4c81-abe6-3dba293df9c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048776 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049427 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-serving-cert\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049807 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-metrics-tls\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.050749 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b85eac-df63-4c81-abe6-3dba293df9c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.051856 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-client\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.065858 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.071122 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.086075 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.125063 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 
06:48:40.151633 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.165599 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.186038 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.206382 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.225762 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.246464 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.278560 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.286017 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.288726 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.305977 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.326629 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.345995 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.366576 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.386408 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.406037 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.426145 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.434430 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.446520 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.471532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.486128 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.506397 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.520834 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ee0e33-a160-4303-af00-0b145647f807-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.525688 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.529591 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ee0e33-a160-4303-af00-0b145647f807-config\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.545835 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.566051 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.570912 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d69d0f34-1e03-438d-9d97-de945aff185f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.585669 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.606298 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.617420 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d0f34-1e03-438d-9d97-de945aff185f-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.626030 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.646175 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.667710 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.673363 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-srv-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.687657 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.703043 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.703202 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-profile-collector-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.706486 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.726258 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.747025 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.766443 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.786926 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.806898 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.826502 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 06:48:40 crc 
kubenswrapper[4842]: I0202 06:48:40.846307 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.863986 4842 request.go:700] Waited for 1.005459468s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.866082 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.887006 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.918326 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.927059 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.946256 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.966689 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.985738 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.006568 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.012476 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/27bce4a1-799c-4d40-900c-455eaba28398-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.027466 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.046897 4842 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.047054 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume podName:5b43b464-5623-46bb-8097-65b505d08960 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:41.547014861 +0000 UTC m=+146.924282813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume") pod "collect-profiles-29500245-vpjnw" (UID: "5b43b464-5623-46bb-8097-65b505d08960") : failed to sync configmap cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.046905 4842 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.047268 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls podName:42ff05d2-dda3-411f-bcee-816f87ce21b8 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:41.547194265 +0000 UTC m=+146.924462217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls") pod "machine-config-controller-84d6567774-nz65j" (UID: "42ff05d2-dda3-411f-bcee-816f87ce21b8") : failed to sync secret cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.047705 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.104638 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.104740 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.106120 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.126346 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.145560 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.165780 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.186689 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.225674 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.246134 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.265760 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.285875 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.307830 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.326993 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.346589 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.366744 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.385984 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.405979 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.426779 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.457587 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.465849 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.486738 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.506155 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.526625 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.579517 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznxk\" (UniqueName: \"kubernetes.io/projected/e08cb720-1a1d-47c3-a787-c61d377bf2dd-kube-api-access-sznxk\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.595900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97ps\" (UniqueName: \"kubernetes.io/projected/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-kube-api-access-z97ps\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.606198 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgtct\" (UniqueName: \"kubernetes.io/projected/10f8b640-1372-484f-b42f-97e336fb2992-kube-api-access-sgtct\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.611011 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.611092 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.611759 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.616809 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.629061 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76r2\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-kube-api-access-k76r2\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.638774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.642463 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.675049 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.675346 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.678743 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6v4\" (UniqueName: \"kubernetes.io/projected/ceaf90b2-229c-4452-8a1b-fd016682bf6e-kube-api-access-7s6v4\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.700571 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.725414 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.731845 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9f6z\" (UniqueName: \"kubernetes.io/projected/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-kube-api-access-c9f6z\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.756404 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shp4g\" (UniqueName: \"kubernetes.io/projected/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-kube-api-access-shp4g\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.771054 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpl2m\" (UniqueName: \"kubernetes.io/projected/e4367135-ecb4-447d-a89e-5dcbeffe345e-kube-api-access-mpl2m\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.789769 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.790159 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.806944 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.809208 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.809610 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmggs\" (UniqueName: \"kubernetes.io/projected/45dcaecb-f74e-4eaf-886a-28b6632f8d44-kube-api-access-xmggs\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.826552 4842 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.846381 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.864727 4842 request.go:700] Waited for 1.905735192s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.866268 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4rp8p"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.866387 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.881589 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.886068 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.892389 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.901687 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.905674 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.926293 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.948362 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.951823 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.963151 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.969711 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.971576 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.986867 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.990376 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"]
Feb 02 06:48:41 crc kubenswrapper[4842]: W0202 06:48:41.994687 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4367135_ecb4_447d_a89e_5dcbeffe345e.slice/crio-8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096 WatchSource:0}: Error finding container 8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096: Status 404 returned error can't find the container with id 8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.010787 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.029440 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vml\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-kube-api-access-g9vml\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.038599 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zhgm\" (UniqueName: \"kubernetes.io/projected/42ff05d2-dda3-411f-bcee-816f87ce21b8-kube-api-access-6zhgm\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.049575 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10f8b640_1372_484f_b42f_97e336fb2992.slice/crio-b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a WatchSource:0}: Error finding container b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a: Status 404 returned error can't find the container with id b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.060396 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5dc9g"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.061270 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.084521 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d69d0f34-1e03-438d-9d97-de945aff185f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.103182 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kqzj\" (UniqueName: \"kubernetes.io/projected/57b85eac-df63-4c81-abe6-3dba293df9c2-kube-api-access-2kqzj\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.117131 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.121000 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98wq\" (UniqueName: \"kubernetes.io/projected/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-kube-api-access-w98wq\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.137468 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.140523 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw98m\" (UniqueName: \"kubernetes.io/projected/091908d5-acab-418a-a5f2-fa909294222a-kube-api-access-bw98m\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.147241 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.147283 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.152409 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c4df1b8_c014_42db_ab26_6ac05f72c8ba.slice/crio-1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503 WatchSource:0}: Error finding container 1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503: Status 404 returned error can't find the container with id 1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.165570 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.172753 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4rp8p"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.175569 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.187761 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.188536 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvqc\" (UniqueName: \"kubernetes.io/projected/fd96d668-a9b2-474f-8617-17eca5f01191-kube-api-access-xvvqc\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.197142 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08cb720_1a1d_47c3_a787_c61d377bf2dd.slice/crio-1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc WatchSource:0}: Error finding container 1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc: Status 404 returned error can't find the container with id 1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.199103 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdspj"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.206674 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.222634 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ee0e33-a160-4303-af00-0b145647f807-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.248801 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45dcaecb_f74e_4eaf_886a_28b6632f8d44.slice/crio-c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c WatchSource:0}: Error finding container c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c: Status 404 returned error can't find the container with id c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.251070 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.256741 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.258982 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dnd7\" (UniqueName: \"kubernetes.io/projected/27bce4a1-799c-4d40-900c-455eaba28398-kube-api-access-2dnd7\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.265936 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.270578 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnfv\" (UniqueName: \"kubernetes.io/projected/cc176201-02a2-46c0-903c-13943d989195-kube-api-access-wqnfv\") pod \"downloads-7954f5f757-pbtq6\" (UID: \"cc176201-02a2-46c0-903c-13943d989195\") " pod="openshift-console/downloads-7954f5f757-pbtq6"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.284082 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.310418 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.333839 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.340698 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341291 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4snk\" (UniqueName: \"kubernetes.io/projected/bc8e3a2f-b630-40bf-865e-c7a035385730-kube-api-access-z4snk\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e95addab-99c5-499c-92bc-f13fd4870710-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341341 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn"
\"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341369 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341403 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341422 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmkg\" (UniqueName: \"kubernetes.io/projected/e95addab-99c5-499c-92bc-f13fd4870710-kube-api-access-qrmkg\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341450 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-key\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341467 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-stats-auth\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341484 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99922ba3-dd03-4c94-9663-9c530f7b3ad0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341500 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341527 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8e3a2f-b630-40bf-865e-c7a035385730-serving-cert\") pod 
\"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341546 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58h4c\" (UniqueName: \"kubernetes.io/projected/99922ba3-dd03-4c94-9663-9c530f7b3ad0-kube-api-access-58h4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.341908 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:42.841894672 +0000 UTC m=+148.219162584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341581 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtlm\" (UniqueName: \"kubernetes.io/projected/23594203-b17a-4d98-95da-a7c0e3a2ef4e-kube-api-access-7gtlm\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343318 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343358 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-config\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343382 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8e3a2f-b630-40bf-865e-c7a035385730-config\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343404 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdtw\" (UniqueName: \"kubernetes.io/projected/29629b99-9606-4830-9623-8c81cecbd0a9-kube-api-access-krdtw\") pod 
\"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343446 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-metrics-certs\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343461 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343475 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-default-certificate\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343509 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23594203-b17a-4d98-95da-a7c0e3a2ef4e-service-ca-bundle\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343527 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343644 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343666 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: 
\"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343905 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-cabundle\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344123 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/29629b99-9606-4830-9623-8c81cecbd0a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344150 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnwc\" (UniqueName: \"kubernetes.io/projected/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-kube-api-access-wdnwc\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344251 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344358 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.372413 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerStarted","Data":"44ebd0c802db6062893241169e4706979097a692764a061e2fde6a02c71197ca"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.373390 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerStarted","Data":"1810ccd323bfce1d8d33adb40473a3ade0c6cc4b2982aa8de512e861bebf9e9f"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.374644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" event={"ID":"74549f13-263e-4e4f-8331-9f7fd6bf36b3","Type":"ContainerStarted","Data":"3785bb331ff60311311b350a2e5064a83ff8c02ccc368737bd311989b3d76b5b"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.374697 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" event={"ID":"74549f13-263e-4e4f-8331-9f7fd6bf36b3","Type":"ContainerStarted","Data":"4784625b0f83e3fea7414409f770b17f45d8471eb978a04de27ca3b0b1a07a11"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.375691 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" event={"ID":"e08cb720-1a1d-47c3-a787-c61d377bf2dd","Type":"ContainerStarted","Data":"1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc"} Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.376735 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aa0cd7d_de34_4c00_8eb2_40e35e430b5d.slice/crio-12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6 WatchSource:0}: Error finding container 12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6: Status 404 returned error can't find the container with id 12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6 Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.377350 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" event={"ID":"10f8b640-1372-484f-b42f-97e336fb2992","Type":"ContainerStarted","Data":"b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.379075 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" event={"ID":"7c4df1b8-c014-42db-ab26-6ac05f72c8ba","Type":"ContainerStarted","Data":"1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.379987 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" event={"ID":"e4367135-ecb4-447d-a89e-5dcbeffe345e","Type":"ContainerStarted","Data":"8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.381077 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" event={"ID":"45dcaecb-f74e-4eaf-886a-28b6632f8d44","Type":"ContainerStarted","Data":"c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.394430 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.413942 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.414652 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.415125 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.436160 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.437313 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.445572 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.445706 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.447788 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:42.947764361 +0000 UTC m=+148.325032273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.447811 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c976fbc-6a91-494d-8d9e-1abe8119acf9-config-volume\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.447849 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tm72\" (UniqueName: \"kubernetes.io/projected/a8cad1e4-b070-477e-a20a-5cf8cb397e85-kube-api-access-6tm72\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448203 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqhp\" (UniqueName: \"kubernetes.io/projected/3c976fbc-6a91-494d-8d9e-1abe8119acf9-kube-api-access-pbqhp\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448248 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966b8965-4dbb-4735-9564-eac0652fa990-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: 
\"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448300 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtlm\" (UniqueName: \"kubernetes.io/projected/23594203-b17a-4d98-95da-a7c0e3a2ef4e-kube-api-access-7gtlm\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448342 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448359 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-plugins-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-config\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448432 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8e3a2f-b630-40bf-865e-c7a035385730-config\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448476 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdtw\" (UniqueName: \"kubernetes.io/projected/29629b99-9606-4830-9623-8c81cecbd0a9-kube-api-access-krdtw\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448517 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-metrics-certs\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448550 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-mountpoint-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 
06:48:42.448576 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448593 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-default-certificate\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448609 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23594203-b17a-4d98-95da-a7c0e3a2ef4e-service-ca-bundle\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448625 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448661 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448678 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5sc\" (UniqueName: \"kubernetes.io/projected/6d58ee7c-c176-4ddd-af48-d9406f4eac74-kube-api-access-ns5sc\") pod \"migrator-59844c95c7-kgv82\" (UID: \"6d58ee7c-c176-4ddd-af48-d9406f4eac74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448692 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86a554b4-30b1-4521-8677-d1974308a379-cert\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448732 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966b8965-4dbb-4735-9564-eac0652fa990-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448762 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448777 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwrq2\" (UniqueName: \"kubernetes.io/projected/966b8965-4dbb-4735-9564-eac0652fa990-kube-api-access-cwrq2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448795 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/90441cdf-d9ad-48d8-a400-9c770bc81a60-kube-api-access-q996c\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448809 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448823 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-tmpfs\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448857 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-cabundle\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448923 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/29629b99-9606-4830-9623-8c81cecbd0a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448940 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-srv-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448957 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnwc\" (UniqueName: \"kubernetes.io/projected/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-kube-api-access-wdnwc\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448972 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-webhook-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-socket-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2slks\" (UniqueName: \"kubernetes.io/projected/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-kube-api-access-2slks\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449059 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449078 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450068 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-config\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450260 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-certs\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " 
pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450285 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-csi-data-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450340 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450374 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4snk\" (UniqueName: \"kubernetes.io/projected/bc8e3a2f-b630-40bf-865e-c7a035385730-kube-api-access-z4snk\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450408 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e95addab-99c5-499c-92bc-f13fd4870710-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450429 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450569 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450592 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450631 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 
06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450649 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-node-bootstrap-token\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450665 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-registration-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450730 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450747 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7xq\" (UniqueName: \"kubernetes.io/projected/14030278-3de4-4425-8308-813d4f7c0a2d-kube-api-access-tz7xq\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450777 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmkg\" (UniqueName: \"kubernetes.io/projected/e95addab-99c5-499c-92bc-f13fd4870710-kube-api-access-qrmkg\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450811 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-key\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450840 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-images\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9mp\" (UniqueName: \"kubernetes.io/projected/86a554b4-30b1-4521-8677-d1974308a379-kube-api-access-cf9mp\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450912 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-cabundle\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450967 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-stats-auth\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450988 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99922ba3-dd03-4c94-9663-9c530f7b3ad0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451008 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msm6c\" (UniqueName: \"kubernetes.io/projected/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-kube-api-access-msm6c\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451025 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451064 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8e3a2f-b630-40bf-865e-c7a035385730-serving-cert\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451100 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c976fbc-6a91-494d-8d9e-1abe8119acf9-metrics-tls\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451118 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58h4c\" (UniqueName: \"kubernetes.io/projected/99922ba3-dd03-4c94-9663-9c530f7b3ad0-kube-api-access-58h4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451135 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/a8cad1e4-b070-477e-a20a-5cf8cb397e85-proxy-tls\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.451161 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:42.951150693 +0000 UTC m=+148.328418595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.455526 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8e3a2f-b630-40bf-865e-c7a035385730-config\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.455552 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.456801 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.457525 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23594203-b17a-4d98-95da-a7c0e3a2ef4e-service-ca-bundle\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.457803 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.458059 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/29629b99-9606-4830-9623-8c81cecbd0a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.459067 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.459166 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e95addab-99c5-499c-92bc-f13fd4870710-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.459999 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8e3a2f-b630-40bf-865e-c7a035385730-serving-cert\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.460095 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.460508 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceaf90b2_229c_4452_8a1b_fd016682bf6e.slice/crio-13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a WatchSource:0}: Error finding container 13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a: Status 404 returned error can't find the container with id 13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.462775 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-key\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.463425 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.463774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-metrics-certs\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.464447 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.465099 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-stats-auth\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.466254 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.467206 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-default-certificate\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.480846 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99922ba3-dd03-4c94-9663-9c530f7b3ad0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.495077 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.509069 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtlm\" (UniqueName: \"kubernetes.io/projected/23594203-b17a-4d98-95da-a7c0e3a2ef4e-kube-api-access-7gtlm\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.521327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdtw\" (UniqueName: \"kubernetes.io/projected/29629b99-9606-4830-9623-8c81cecbd0a9-kube-api-access-krdtw\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.538629 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.544804 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.549446 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551544 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.551750 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.051718044 +0000 UTC m=+148.428985946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551789 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551824 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7xq\" (UniqueName: \"kubernetes.io/projected/14030278-3de4-4425-8308-813d4f7c0a2d-kube-api-access-tz7xq\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-images\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9mp\" (UniqueName: \"kubernetes.io/projected/86a554b4-30b1-4521-8677-d1974308a379-kube-api-access-cf9mp\") pod \"ingress-canary-kb6j9\" 
(UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551907 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msm6c\" (UniqueName: \"kubernetes.io/projected/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-kube-api-access-msm6c\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551930 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c976fbc-6a91-494d-8d9e-1abe8119acf9-metrics-tls\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8cad1e4-b070-477e-a20a-5cf8cb397e85-proxy-tls\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551976 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c976fbc-6a91-494d-8d9e-1abe8119acf9-config-volume\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551992 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tm72\" (UniqueName: \"kubernetes.io/projected/a8cad1e4-b070-477e-a20a-5cf8cb397e85-kube-api-access-6tm72\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552010 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqhp\" (UniqueName: \"kubernetes.io/projected/3c976fbc-6a91-494d-8d9e-1abe8119acf9-kube-api-access-pbqhp\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552037 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966b8965-4dbb-4735-9564-eac0652fa990-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552061 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-plugins-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552881 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-mountpoint-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552924 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552946 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86a554b4-30b1-4521-8677-d1974308a379-cert\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552966 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5sc\" (UniqueName: \"kubernetes.io/projected/6d58ee7c-c176-4ddd-af48-d9406f4eac74-kube-api-access-ns5sc\") pod \"migrator-59844c95c7-kgv82\" (UID: \"6d58ee7c-c176-4ddd-af48-d9406f4eac74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553025 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966b8965-4dbb-4735-9564-eac0652fa990-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553045 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwrq2\" (UniqueName: \"kubernetes.io/projected/966b8965-4dbb-4735-9564-eac0652fa990-kube-api-access-cwrq2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553081 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/90441cdf-d9ad-48d8-a400-9c770bc81a60-kube-api-access-q996c\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553100 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-tmpfs\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553126 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-srv-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-webhook-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553193 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-socket-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553211 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2slks\" (UniqueName: \"kubernetes.io/projected/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-kube-api-access-2slks\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553255 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-certs\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553269 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-csi-data-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553324 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553347 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-registration-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553408 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-node-bootstrap-token\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553527 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c976fbc-6a91-494d-8d9e-1abe8119acf9-config-volume\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553860 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-images\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.554634 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.054608114 +0000 UTC m=+148.431876026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.554938 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-tmpfs\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.555072 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c976fbc-6a91-494d-8d9e-1abe8119acf9-metrics-tls\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.555714 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966b8965-4dbb-4735-9564-eac0652fa990-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.556534 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.556603 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-csi-data-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557628 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557685 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-plugins-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557714 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-mountpoint-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557759 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-socket-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557898 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-registration-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.558720 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8cad1e4-b070-477e-a20a-5cf8cb397e85-proxy-tls\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.564011 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-node-bootstrap-token\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.564758 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-certs\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.572813 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966b8965-4dbb-4735-9564-eac0652fa990-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.572903 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.572921 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-srv-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.573162 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86a554b4-30b1-4521-8677-d1974308a379-cert\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.573390 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-webhook-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.573412 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.587206 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmkg\" (UniqueName: \"kubernetes.io/projected/e95addab-99c5-499c-92bc-f13fd4870710-kube-api-access-qrmkg\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.598129 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4snk\" (UniqueName: \"kubernetes.io/projected/bc8e3a2f-b630-40bf-865e-c7a035385730-kube-api-access-z4snk\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.619650 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lh2qm"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.624442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.641850 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.655876 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.656386 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.156366793 +0000 UTC m=+148.533634705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.663126 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnwc\" (UniqueName: \"kubernetes.io/projected/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-kube-api-access-wdnwc\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.667068 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58h4c\" (UniqueName: \"kubernetes.io/projected/99922ba3-dd03-4c94-9663-9c530f7b3ad0-kube-api-access-58h4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.685958 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.721476 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.725479 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tm72\" (UniqueName: \"kubernetes.io/projected/a8cad1e4-b070-477e-a20a-5cf8cb397e85-kube-api-access-6tm72\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.732615 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.752116 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqhp\" (UniqueName: \"kubernetes.io/projected/3c976fbc-6a91-494d-8d9e-1abe8119acf9-kube-api-access-pbqhp\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.757484 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.757805 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.257791954 +0000 UTC m=+148.635059866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.766602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9mp\" (UniqueName: \"kubernetes.io/projected/86a554b4-30b1-4521-8677-d1974308a379-kube-api-access-cf9mp\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.768037 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.788875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwrq2\" (UniqueName: \"kubernetes.io/projected/966b8965-4dbb-4735-9564-eac0652fa990-kube-api-access-cwrq2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.791661 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.801053 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msm6c\" (UniqueName: \"kubernetes.io/projected/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-kube-api-access-msm6c\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.803988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.811076 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.822829 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.830850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7xq\" (UniqueName: \"kubernetes.io/projected/14030278-3de4-4425-8308-813d4f7c0a2d-kube-api-access-tz7xq\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.840151 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2slks\" (UniqueName: \"kubernetes.io/projected/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-kube-api-access-2slks\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.851430 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.861004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.861324 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.361304676 +0000 UTC m=+148.738572588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.870996 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/90441cdf-d9ad-48d8-a400-9c770bc81a60-kube-api-access-q996c\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.878178 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.878977 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.884277 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.886336 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5wqx2"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.888778 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5sc\" (UniqueName: \"kubernetes.io/projected/6d58ee7c-c176-4ddd-af48-d9406f4eac74-kube-api-access-ns5sc\") pod \"migrator-59844c95c7-kgv82\" (UID: \"6d58ee7c-c176-4ddd-af48-d9406f4eac74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.890119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.898490 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.906569 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.913707 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.935940 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.946254 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.949420 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.962040 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.962558 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.462544832 +0000 UTC m=+148.839812744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.977913 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b43b464_5623_46bb_8097_65b505d08960.slice/crio-5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34 WatchSource:0}: Error finding container 5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34: Status 404 returned error can't find the container with id 5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34 Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.982063 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf3383aa_e821_4389_b2f0_cc697ad4cc7a.slice/crio-3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda WatchSource:0}: Error finding container 3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda: Status 404 returned error can't find the container with id 3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.984786 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ff05d2_dda3_411f_bcee_816f87ce21b8.slice/crio-ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85 WatchSource:0}: Error finding container ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85: Status 404 returned error can't find the container with id ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.063721 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.064252 4842 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.56423308 +0000 UTC m=+148.941500992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.066566 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"] Feb 02 06:48:43 crc kubenswrapper[4842]: W0202 06:48:43.100754 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23594203_b17a_4d98_95da_a7c0e3a2ef4e.slice/crio-7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee WatchSource:0}: Error finding container 7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee: Status 404 returned error can't find the container with id 7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.145502 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pbtq6"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.151350 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.152851 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.165372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.165665 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.665651781 +0000 UTC m=+149.042919693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.173359 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.225715 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.266988 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.267952 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.767933043 +0000 UTC m=+149.145200945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: W0202 06:48:43.277168 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57b85eac_df63_4c81_abe6_3dba293df9c2.slice/crio-3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521 WatchSource:0}: Error finding container 3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521: Status 404 returned error can't find the container with id 3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.377553 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.378436 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.378498 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.378540 4842 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.378930 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.878917466 +0000 UTC m=+149.256185378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.384603 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.398271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.399954 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.407476 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" event={"ID":"42ff05d2-dda3-411f-bcee-816f87ce21b8","Type":"ContainerStarted","Data":"ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.411559 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m2mqz" event={"ID":"14030278-3de4-4425-8308-813d4f7c0a2d","Type":"ContainerStarted","Data":"191b477ad776b94a2600969b4929206555e02f392f64c625a9e5dd238356e0ee"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.420123 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerStarted","Data":"64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.420184 4842 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerStarted","Data":"643cd1b7543d0a40a6f2280aca5f3b03741bd2063f49a6310b7a1671fc67d3cc"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.421236 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.447998 4842 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-brh4m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.448075 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.478052 4842 generic.go:334] "Generic (PLEG): container finished" podID="10f8b640-1372-484f-b42f-97e336fb2992" containerID="33308fffc29e09c2809c8296fe5ed110a7c17807a90952ba788c9e21c7133299" exitCode=0 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.476958 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" event={"ID":"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef","Type":"ContainerStarted","Data":"70f7df960c8c15dc99df889941b319c6bdc1ecff906022dec5bf662487f58a4c"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.478990 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" event={"ID":"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef","Type":"ContainerStarted","Data":"188f3a400d52af94f196b7bfd0f212fbf10bc7c314e43ab85fab6d2ed1708e8f"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.479187 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" event={"ID":"10f8b640-1372-484f-b42f-97e336fb2992","Type":"ContainerDied","Data":"33308fffc29e09c2809c8296fe5ed110a7c17807a90952ba788c9e21c7133299"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.480376 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.480634 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.482513 4842 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.982483949 +0000 UTC m=+149.359751861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.495929 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.496770 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.500326 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.546735 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" event={"ID":"e08cb720-1a1d-47c3-a787-c61d377bf2dd","Type":"ContainerStarted","Data":"bdb1e584a03832c94aa5f1bf36e11d0a2a871b030797a8652337af4f9beecb08"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.546814 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.564135 4842 patch_prober.go:28] interesting pod/console-operator-58897d9998-4rp8p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.564695 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" podUID="e08cb720-1a1d-47c3-a787-c61d377bf2dd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.583260 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.585119 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 06:48:44.085102689 +0000 UTC m=+149.462370601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.623402 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" event={"ID":"e4367135-ecb4-447d-a89e-5dcbeffe345e","Type":"ContainerStarted","Data":"4a72f24d6a3cbccd529641d399febb8d89d65c4272b29b63f17cc77940c63603"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.668513 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" event={"ID":"e95addab-99c5-499c-92bc-f13fd4870710","Type":"ContainerStarted","Data":"b3003661d21f7ddeaa342d70fec0f1a595d7db0dd41d7d3b64338bb52034151e"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.711356 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.713196 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" event={"ID":"27bce4a1-799c-4d40-900c-455eaba28398","Type":"ContainerStarted","Data":"09e95bced85da80bd8ffd68f3301db9615973f0b26cbc28b817abe57671274ad"} Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.713660 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.213637718 +0000 UTC m=+149.590905630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.718785 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerStarted","Data":"3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.733176 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" event={"ID":"74549f13-263e-4e4f-8331-9f7fd6bf36b3","Type":"ContainerStarted","Data":"2b06ca9643e6dd66ea229dc73db41bbef76bafb5e58300e8bc881d1a7b0842f2"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.738964 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" event={"ID":"bf3383aa-e821-4389-b2f0-cc697ad4cc7a","Type":"ContainerStarted","Data":"3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.746001 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.751747 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.752134 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" event={"ID":"091908d5-acab-418a-a5f2-fa909294222a","Type":"ContainerStarted","Data":"65dc9362c6b26f739995b4de9917da7cb58d0cae90f7b95923ceefe53ac9c22f"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.752176 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" event={"ID":"091908d5-acab-418a-a5f2-fa909294222a","Type":"ContainerStarted","Data":"9ed29c80f17cb758d8b4ef130b3adcd8c80632dbec158c183db1e837dd9a47dc"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.755264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.761144 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.770087 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" event={"ID":"fd96d668-a9b2-474f-8617-17eca5f01191","Type":"ContainerStarted","Data":"bc0f60c5880e048d9b8d09aa27d50fdf78cd9c8eef2084028b57c06b7e7231e8"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.772437 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerStarted","Data":"f626d676ce0b2dbd85f858b166fb0050d475783a83143a42e19f369ae37353e6"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.779958 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerStarted","Data":"5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.802512 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" event={"ID":"d69d0f34-1e03-438d-9d97-de945aff185f","Type":"ContainerStarted","Data":"b0ffad2cd3c45f0a4e916abe4e0753f6e6d92ab58c59073999ac49730b021db9"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.809784 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" event={"ID":"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d","Type":"ContainerStarted","Data":"239aea454323fbca3eb7b074809688382235f97c4aaeec9ff2a95a2f210123bf"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.809821 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" event={"ID":"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d","Type":"ContainerStarted","Data":"12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812680 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812726 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerStarted","Data":"25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812772 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerStarted","Data":"9e442ed8624abf7c7c008be60f767ce4757519be014cdfd4e95fe98d8969b767"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812936 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.813056 4842 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.31304023 +0000 UTC m=+149.690308142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.816337 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pbtq6" event={"ID":"cc176201-02a2-46c0-903c-13943d989195","Type":"ContainerStarted","Data":"abb907cbbedc7828acfd06c8ee8bae680599c1c5999a4680cb0c9a6dee0b95ad"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.829291 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" event={"ID":"7c4df1b8-c014-42db-ab26-6ac05f72c8ba","Type":"ContainerStarted","Data":"2dc282acc934af0f4a041ef148e88a9e0d6a5040600b529a7e6e282fd12e43b2"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.836573 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.838473 4842 generic.go:334] "Generic (PLEG): container finished" podID="d8b4ca95-d26b-4f03-b095-b5096b6c3fbe" containerID="b7385cd6372928f96bc72bbc29e57087705ce0ea17acf32f23a5328a7a0b2ec4" exitCode=0 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.838521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerDied","Data":"b7385cd6372928f96bc72bbc29e57087705ce0ea17acf32f23a5328a7a0b2ec4"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.861370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" event={"ID":"45dcaecb-f74e-4eaf-886a-28b6632f8d44","Type":"ContainerStarted","Data":"fe44bedac52c769b93786c7124dc2a65a35448b9d0da00c8e6691fabf5fe1c67"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.867765 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerStarted","Data":"ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.868366 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.873279 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" event={"ID":"f2ee0e33-a160-4303-af00-0b145647f807","Type":"ContainerStarted","Data":"ccb5c4e8c7fd3c61220db19517da2bd7a1b1f1f9f5c81cb9219024caa0cd37d7"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.876164 4842 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" event={"ID":"ceaf90b2-229c-4452-8a1b-fd016682bf6e","Type":"ContainerStarted","Data":"40a514a6aabc79b06fa62bd09dac8e951547078fe0891998d0bf0db2343e22b5"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.876190 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" event={"ID":"ceaf90b2-229c-4452-8a1b-fd016682bf6e","Type":"ContainerStarted","Data":"13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.880511 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-j7bfz" event={"ID":"23594203-b17a-4d98-95da-a7c0e3a2ef4e","Type":"ContainerStarted","Data":"7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.906836 4842 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-j9jgh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.906891 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" podUID="091908d5-acab-418a-a5f2-fa909294222a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907512 4842 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rssw5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907539 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907594 4842 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hj5sv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907607 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.914439 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.921007 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.42098744 +0000 UTC m=+149.798255352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.019103 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.023680 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.523657041 +0000 UTC m=+149.900924953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.121800 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.122179 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.622159881 +0000 UTC m=+149.999427783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.223155 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.223822 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.723809038 +0000 UTC m=+150.101076940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.315733 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" podStartSLOduration=128.315710518 podStartE2EDuration="2m8.315710518s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.314625941 +0000 UTC m=+149.691893853" watchObservedRunningTime="2026-02-02 06:48:44.315710518 +0000 UTC m=+149.692978430" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.326496 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.326944 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.8269251 +0000 UTC m=+150.204193012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.360902 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" podStartSLOduration=128.360877674 podStartE2EDuration="2m8.360877674s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.353265639 +0000 UTC m=+149.730533551" watchObservedRunningTime="2026-02-02 06:48:44.360877674 +0000 UTC m=+149.738145586" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.405298 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"] Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.429467 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.429873 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.929859568 +0000 UTC m=+150.307127480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.440018 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" podStartSLOduration=128.439995614 podStartE2EDuration="2m8.439995614s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.435687829 +0000 UTC m=+149.812955741" watchObservedRunningTime="2026-02-02 06:48:44.439995614 +0000 UTC m=+149.817263526" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.500458 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" podStartSLOduration=128.500429059 podStartE2EDuration="2m8.500429059s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.475043664 +0000 UTC m=+149.852311576" watchObservedRunningTime="2026-02-02 06:48:44.500429059 +0000 UTC m=+149.877696971" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.505394 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.514759 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" podStartSLOduration=128.514738456 podStartE2EDuration="2m8.514738456s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.508101465 +0000 UTC m=+149.885369377" watchObservedRunningTime="2026-02-02 06:48:44.514738456 +0000 UTC m=+149.892006368" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.531651 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.532019 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.031995245 +0000 UTC m=+150.409263157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.581160 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" podStartSLOduration=128.581136767 podStartE2EDuration="2m8.581136767s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.580690877 +0000 UTC m=+149.957958799" watchObservedRunningTime="2026-02-02 06:48:44.581136767 +0000 UTC m=+149.958404679" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.581634 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kmw8f" podStartSLOduration=128.581624909 podStartE2EDuration="2m8.581624909s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.554556392 +0000 UTC m=+149.931824314" watchObservedRunningTime="2026-02-02 06:48:44.581624909 +0000 UTC m=+149.958892821" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.616324 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" podStartSLOduration=128.616300291 podStartE2EDuration="2m8.616300291s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.615997463 +0000 UTC m=+149.993265375" watchObservedRunningTime="2026-02-02 06:48:44.616300291 +0000 UTC m=+149.993568203" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.632878 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.633181 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.13316689 +0000 UTC m=+150.510434802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.657523 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" podStartSLOduration=128.657505161 podStartE2EDuration="2m8.657505161s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.655629855 +0000 UTC m=+150.032897767" watchObservedRunningTime="2026-02-02 06:48:44.657505161 +0000 UTC m=+150.034773063" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.733645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.733969 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.233950336 +0000 UTC m=+150.611218248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.741375 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podStartSLOduration=128.741356675 podStartE2EDuration="2m8.741356675s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.701782905 +0000 UTC m=+150.079050817" watchObservedRunningTime="2026-02-02 06:48:44.741356675 +0000 UTC m=+150.118624587" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.742986 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" podStartSLOduration=128.742979915 podStartE2EDuration="2m8.742979915s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.740758911 +0000 UTC m=+150.118026813" watchObservedRunningTime="2026-02-02 06:48:44.742979915 +0000 UTC m=+150.120247827" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.778824 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" podStartSLOduration=128.778803464 podStartE2EDuration="2m8.778803464s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.77739186 +0000 UTC m=+150.154659772" watchObservedRunningTime="2026-02-02 06:48:44.778803464 +0000 UTC m=+150.156071376" Feb 02 06:48:44 crc kubenswrapper[4842]: W0202 06:48:44.817255 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f753a1_ecf0_4b2c_9121_989677c6b2a6.slice/crio-86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac WatchSource:0}: Error finding container 86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac: Status 404 returned error can't find the container with id 86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.839281 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.839642 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 06:48:45.33962653 +0000 UTC m=+150.716894442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.840609 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podStartSLOduration=128.840589443 podStartE2EDuration="2m8.840589443s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.839932597 +0000 UTC m=+150.217200509" watchObservedRunningTime="2026-02-02 06:48:44.840589443 +0000 UTC m=+150.217857355" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.943953 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.944814 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.444789062 +0000 UTC m=+150.822056974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.045846 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.046197 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.546182192 +0000 UTC m=+150.923450104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.066757 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerStarted","Data":"86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.101948 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" event={"ID":"45dcaecb-f74e-4eaf-886a-28b6632f8d44","Type":"ContainerStarted","Data":"9893cfb13791ff92b87735366f0d73281bc502ec8f5c46d7a77e471885879a8b"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.105056 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" podStartSLOduration=129.10504096 podStartE2EDuration="2m9.10504096s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.943383278 +0000 UTC m=+150.320651190" watchObservedRunningTime="2026-02-02 06:48:45.10504096 +0000 UTC m=+150.482308872" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.106169 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv9fc"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.117643 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.147007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.147662 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.647641404 +0000 UTC m=+151.024909316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.147848 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" event={"ID":"99922ba3-dd03-4c94-9663-9c530f7b3ad0","Type":"ContainerStarted","Data":"e903d0c7179a7a8213973f57b1d8571980c1db1773ffcec965e4436bc5deecca"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.167769 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" event={"ID":"e95addab-99c5-499c-92bc-f13fd4870710","Type":"ContainerStarted","Data":"bf8c3f93461b4f45026c3dbcf69102b55e4905f119339a26fbba42d9239f2b9a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.174422 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" event={"ID":"bc8e3a2f-b630-40bf-865e-c7a035385730","Type":"ContainerStarted","Data":"633a96cf373218e4902f722440601e3e44ff539ab1dfcd396628e3317216b44a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.196107 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pbtq6" event={"ID":"cc176201-02a2-46c0-903c-13943d989195","Type":"ContainerStarted","Data":"11860d3d3dd36f702b7fbbac25a115db9fc5e69c5bae23b02fb07557a2fd8f8a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.197397 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.204558 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" event={"ID":"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8","Type":"ContainerStarted","Data":"4f2df937c73158110bca83af94c7ca1466f862d31bb5d5ac9f1c617ca204a0ca"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.215618 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-j7bfz" event={"ID":"23594203-b17a-4d98-95da-a7c0e3a2ef4e","Type":"ContainerStarted","Data":"6a57a2d4264cf37752ab3da69a10983b86385f84e5fe9c2db99830075f52413a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.221145 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerStarted","Data":"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.230561 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.248614 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= 
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.248680 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.249550 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.250421 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.750398987 +0000 UTC m=+151.127666899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.258093 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerStarted","Data":"d2fd60f59fddc30897ba37779de20ce7fb25833d572dcc1de237b74148cf5af6"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.280575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" event={"ID":"42ff05d2-dda3-411f-bcee-816f87ce21b8","Type":"ContainerStarted","Data":"c38cc69394295f586172b4acf019bfc50c159ccc330982f39cf94e1fe9b27683"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.285183 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" event={"ID":"e4367135-ecb4-447d-a89e-5dcbeffe345e","Type":"ContainerStarted","Data":"93ec7525bee512d972c992202015e6a305802c186439d4e8975bf16153a14c8f"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.352669 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.354691 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.854668878 +0000 UTC m=+151.231936790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.356624 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-pbtq6" podStartSLOduration=129.356608325 podStartE2EDuration="2m9.356608325s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.338010753 +0000 UTC m=+150.715278675" watchObservedRunningTime="2026-02-02 06:48:45.356608325 +0000 UTC m=+150.733876237"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.380374 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z2sjd"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.384793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" event={"ID":"fd96d668-a9b2-474f-8617-17eca5f01191","Type":"ContainerStarted","Data":"47646ec9237caa84032e8451e41a413bfcc66da7a9af859fc66fe722176c041e"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.400605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerStarted","Data":"ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.416756 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.421999 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" event={"ID":"d69d0f34-1e03-438d-9d97-de945aff185f","Type":"ContainerStarted","Data":"effb92735b3e6afb10c9dc8774289f46b7283dacd07a69849dd78dcdb2d304b3"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.454180 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.455195 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.955173606 +0000 UTC m=+151.332441518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.475998 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" event={"ID":"1b0e61a0-72dd-4edd-8217-c7b157e2c38c","Type":"ContainerStarted","Data":"a05baceb8c51022ca5f91f8419a01be0cc4ac107e436d2ab4eb90aeddd510ff6"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.476046 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.479520 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fhk9"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.479946 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-j7bfz" podStartSLOduration=129.479919387 podStartE2EDuration="2m9.479919387s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.475042878 +0000 UTC m=+150.852310790" watchObservedRunningTime="2026-02-02 06:48:45.479919387 +0000 UTC m=+150.857187299"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.489783 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.509694 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.517871 4842 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n6n4t container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.517932 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" podUID="1b0e61a0-72dd-4edd-8217-c7b157e2c38c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.530364 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.548195 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.566360 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.583415 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.083389348 +0000 UTC m=+151.460657260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.605386 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kb6j9"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.616831 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" podStartSLOduration=130.616802658 podStartE2EDuration="2m10.616802658s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.60493069 +0000 UTC m=+150.982198602" watchObservedRunningTime="2026-02-02 06:48:45.616802658 +0000 UTC m=+150.994070570"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.654054 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.678741 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4rp8p"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.686983 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.694336 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.694876 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.194860833 +0000 UTC m=+151.572128745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: W0202 06:48:45.707722 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8cad1e4_b070_477e_a20a_5cf8cb397e85.slice/crio-4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e WatchSource:0}: Error finding container 4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e: Status 404 returned error can't find the container with id 4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.736685 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" podStartSLOduration=129.736663737 podStartE2EDuration="2m9.736663737s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.690540108 +0000 UTC m=+151.067808020" watchObservedRunningTime="2026-02-02 06:48:45.736663737 +0000 UTC m=+151.113931649"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.739355 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-j7bfz"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.745575 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:45 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:45 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:45 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.745636 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.801517 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.801810 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.301791307 +0000 UTC m=+151.679059219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.822032 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" podStartSLOduration=129.822010008 podStartE2EDuration="2m9.822010008s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.820898711 +0000 UTC m=+151.198166623" watchObservedRunningTime="2026-02-02 06:48:45.822010008 +0000 UTC m=+151.199277920"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.883370 4842 csr.go:261] certificate signing request csr-sclbq is approved, waiting to be issued
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.883690 4842 csr.go:257] certificate signing request csr-sclbq is issued
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.904896 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.906022 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.406002906 +0000 UTC m=+151.783270818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.029630 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.031128 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.531098222 +0000 UTC m=+151.908366124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.131954 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.132397 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.632381319 +0000 UTC m=+152.009649231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.233010 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.233268 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.733243927 +0000 UTC m=+152.110511849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.233370 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.233830 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.733822291 +0000 UTC m=+152.111090203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.334812 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.335399 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.835371805 +0000 UTC m=+152.212639717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.335529 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.336267 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.836259696 +0000 UTC m=+152.213527598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.438692 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.438920 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.938900487 +0000 UTC m=+152.316168389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.439059 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.439461 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.93944885 +0000 UTC m=+152.316716762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.449352 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-74vp9"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.450396 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.457003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.457478 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541123 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541771 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.541918 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.041893766 +0000 UTC m=+152.419161678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.555803 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" event={"ID":"9f265e28-d9d2-43db-b43b-8f7d778b2fa5","Type":"ContainerStarted","Data":"15f42a690f24dad2e8e12cbd87e95b5de1963351a99e5f92a66c822bb93e2a42"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.555879 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" event={"ID":"9f265e28-d9d2-43db-b43b-8f7d778b2fa5","Type":"ContainerStarted","Data":"4237fb3fc2c0d0427905882c8ea87076a58d31ccd42f534017bd8f4a62869000"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.605226 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z2sjd" event={"ID":"3c976fbc-6a91-494d-8d9e-1abe8119acf9","Type":"ContainerStarted","Data":"a9698972c91998c77dcd7c672110e16872ab1ca222eab182dc3cdc9e1a6629e0"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645427 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645480 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645524 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645577 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.647064 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.648872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.662298 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.162269677 +0000 UTC m=+152.539537589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.663335 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerStarted","Data":"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.665278 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.672772 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bzsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.672873 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.693902 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5jt7"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.697090 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" event={"ID":"966b8965-4dbb-4735-9564-eac0652fa990","Type":"ContainerStarted","Data":"49c8dab4096bc22d0214ccc074500be0de11bfa62de290221a3661baa279c956"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.697230 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" event={"ID":"966b8965-4dbb-4735-9564-eac0652fa990","Type":"ContainerStarted","Data":"5c6b84f9f11dcf696a0f508631b9436f8b8bd39ab4a0c268b86ed1e8f1857af6"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.697414 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.701121 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.726979 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.738839 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" event={"ID":"99922ba3-dd03-4c94-9663-9c530f7b3ad0","Type":"ContainerStarted","Data":"ded980eed0bdc6282da6593565d27076ddee0dc4971ef792b14482b8d4fdf695"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.740963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.746883 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:46 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:46 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:46 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.747420 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.748117 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.756064 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.256037933 +0000 UTC m=+152.633305845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.804375 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"36431c07d80df4215cfbde2d713a5ce005a80527310444090090b9c5f928ad31"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.810915 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.816617 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.828133 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"508a7b34a0de2d6d36e3a3b6ffdac868bd6d1323451256fd8c1bfac6ac424442"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.828312 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.855862 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.855923 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.855947 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.856006 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.859049 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.359029482 +0000 UTC m=+152.736297394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.885396 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 06:43:45 +0000 UTC, rotation deadline is 2026-11-15 17:24:34.990698022 +0000 UTC
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.886372 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6874h35m48.104333327s for next certificate rotation
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" event={"ID":"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b","Type":"ContainerStarted","Data":"b57b600b75ae9681e28f07052aa1148c25502ec64e1dd29ac9424ff3806f45de"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889091 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" event={"ID":"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b","Type":"ContainerStarted","Data":"a7520f027b387f857942f3f76d19851d0d7a5cd9a741cd5666b24c66f48ef91e"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889755 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.904504 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podStartSLOduration=130.904475115 podStartE2EDuration="2m10.904475115s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:46.885098044 +0000 UTC m=+152.262365956" watchObservedRunningTime="2026-02-02 06:48:46.904475115 +0000 UTC m=+152.281743027"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.906171 4842 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z8q7b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.906269 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" podUID="b7ceecfd-f2a9-4c82-85de-e32eb001eb2b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.926740 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" event={"ID":"f2ee0e33-a160-4303-af00-0b145647f807","Type":"ContainerStarted","Data":"1668fa4ee8ccd649a60667059e75b5e87cc153d6a088ea4159a7ed346889e106"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.953047 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" event={"ID":"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8","Type":"ContainerStarted","Data":"5ca29137122a39b9ac957c77342534c76b8b34092851c7595d6f8f3c7cc5b828"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.955875 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5f9169ec9aff7d5034c7afdc8458f4af1bc2732017b6de7bc063ad3ed4561a8c"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d92225d2ec35a17728862e902cfa1ead30114e942875a61db9b6f6d198f4a6c9"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956611 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956875 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956961 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.957136 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.957225 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.957394 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.457367268 +0000 UTC m=+152.834635180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.970676 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.970998 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.979431 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" event={"ID":"6d58ee7c-c176-4ddd-af48-d9406f4eac74","Type":"ContainerStarted","Data":"daa8a7174846994a23811ef2ed7deb05c3720d0567ad2065fea09b3d52e6f730"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.985014 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" podStartSLOduration=130.984991128 podStartE2EDuration="2m10.984991128s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:46.979945186 +0000 UTC m=+152.357213098" watchObservedRunningTime="2026-02-02 06:48:46.984991128 +0000 UTC m=+152.362259030"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.000385 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" event={"ID":"bf3383aa-e821-4389-b2f0-cc697ad4cc7a","Type":"ContainerStarted","Data":"d0bde4d8c2cd6144f08674c306929b9ff613065e33f6d8a0333e2008f2ca9c4d"}
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.011562 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" event={"ID":"1b0e61a0-72dd-4edd-8217-c7b157e2c38c","Type":"ContainerStarted","Data":"f7aea7e5c9085437bb918b8a8754534e83d2838ab9e4b1d44de64b0ff655b5e7"}
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.024150 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" podStartSLOduration=131.024130478 podStartE2EDuration="2m11.024130478s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.020157202 +0000 UTC m=+152.397425104" watchObservedRunningTime="2026-02-02 06:48:47.024130478 +0000 UTC m=+152.401398380"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.035351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" event={"ID":"29629b99-9606-4830-9623-8c81cecbd0a9","Type":"ContainerStarted","Data":"e959fdbbf0951e98dbf0fb8a34fd65e9e378852406a95f0339e6592f27d19356"}
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.035405 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" event={"ID":"29629b99-9606-4830-9623-8c81cecbd0a9","Type":"ContainerStarted","Data":"c918c1f859ec2b36508dd23d07a42d9b1413d0bf48f4e9bd3000d0775f5c8c22"}
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.035427 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.047936 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l9qkz"]
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.054093 4842 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.058939 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.058976 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.059063 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.059147 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.059923 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.061005 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.560988282 +0000 UTC m=+152.938256194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.061116 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.079053 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" podStartSLOduration=131.07902019 podStartE2EDuration="2m11.07902019s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.056206526 +0000 UTC m=+152.433474448" watchObservedRunningTime="2026-02-02 06:48:47.07902019 +0000 UTC m=+152.456288102" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.080485 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.082502 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.113329 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.132486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" event={"ID":"27bce4a1-799c-4d40-900c-455eaba28398","Type":"ContainerStarted","Data":"2cd70c383102200c10b046ce7a0cd1c1f1076c2986f23a7769899d249ec23a02"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160248 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " 
pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160560 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160586 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.162028 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.662004764 +0000 UTC m=+153.039272676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.173308 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" podStartSLOduration=131.173292118 podStartE2EDuration="2m11.173292118s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.12889269 +0000 UTC m=+152.506160602" watchObservedRunningTime="2026-02-02 06:48:47.173292118 +0000 UTC m=+152.550560030" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.174706 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" podStartSLOduration=131.174700082 podStartE2EDuration="2m11.174700082s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.161623414 +0000 UTC m=+152.538891316" watchObservedRunningTime="2026-02-02 06:48:47.174700082 +0000 UTC m=+152.551967994" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.190826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerStarted","Data":"04ffd2243b5849ee630dca21e47581a608d351fcdc3dc93a8251781dde7ea1c2"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.207753 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263377 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263426 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263524 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.265038 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.765016813 +0000 UTC m=+153.142284715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.265117 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.265194 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.265255 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" event={"ID":"10f8b640-1372-484f-b42f-97e336fb2992","Type":"ContainerStarted","Data":"900ed9927278d1dc592519743974292fe020484c481cf898486b447ec27bf41e"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.285569 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" podStartSLOduration=131.285535321 podStartE2EDuration="2m11.285535321s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.282260682 +0000 UTC m=+152.659528594" watchObservedRunningTime="2026-02-02 06:48:47.285535321 +0000 UTC m=+152.662803233" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.295486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" event={"ID":"e95addab-99c5-499c-92bc-f13fd4870710","Type":"ContainerStarted","Data":"606f6007502976165b22ae007e25e33f729187b8fa70583e2fb41ce07404a6cb"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.320917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" event={"ID":"a8cad1e4-b070-477e-a20a-5cf8cb397e85","Type":"ContainerStarted","Data":"4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.332341 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" podStartSLOduration=131.332322687 podStartE2EDuration="2m11.332322687s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.331185589 +0000 UTC m=+152.708453521" watchObservedRunningTime="2026-02-02 06:48:47.332322687 +0000 UTC m=+152.709590599" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.360381 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.364426 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.379876 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.380562 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.880504646 +0000 UTC m=+153.257772558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.380934 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.388110 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.88809033 +0000 UTC m=+153.265358242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.394577 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" podStartSLOduration=131.394551136 podStartE2EDuration="2m11.394551136s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.375760451 +0000 UTC m=+152.753028363" watchObservedRunningTime="2026-02-02 06:48:47.394551136 +0000 UTC m=+152.771819048" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.406542 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m2mqz" event={"ID":"14030278-3de4-4425-8308-813d4f7c0a2d","Type":"ContainerStarted","Data":"3f28817203371b56e81eb787ae3bd71bff7d3c630b4bc529c1ff7107d2cb9d14"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453019 4842 generic.go:334] "Generic (PLEG): container finished" podID="57b85eac-df63-4c81-abe6-3dba293df9c2" containerID="d2fd60f59fddc30897ba37779de20ce7fb25833d572dcc1de237b74148cf5af6" exitCode=0 Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453738 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerDied","Data":"d2fd60f59fddc30897ba37779de20ce7fb25833d572dcc1de237b74148cf5af6"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453765 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453775 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerStarted","Data":"3b4dbc3751ec24a7f4a8ae73a64ea7c63704029c634d0e7e87f555b8b9d21c56"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.477612 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kb6j9" event={"ID":"86a554b4-30b1-4521-8677-d1974308a379","Type":"ContainerStarted","Data":"de02899c60928ac9c2bedfa6fdefa8efa363483658a025a263c5ed9b9d6e0344"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.483269 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.484439 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 06:48:47.984418307 +0000 UTC m=+153.361686219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.503586 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" event={"ID":"bc8e3a2f-b630-40bf-865e-c7a035385730","Type":"ContainerStarted","Data":"fac8f7f747549abd71c8fb62a9d629c838529476ee3e56587ee47eb6820e973b"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.504395 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.518026 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"12493dcc1ca2c5aeb5273cd5a3222736513b0191aee70f6150b75b9bd0692df1"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.519686 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m2mqz" podStartSLOduration=8.519675333 podStartE2EDuration="8.519675333s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.519068158 +0000 UTC m=+152.896336070" watchObservedRunningTime="2026-02-02 06:48:47.519675333 +0000 UTC m=+152.896943245" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.521811 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" podStartSLOduration=131.521802344 podStartE2EDuration="2m11.521802344s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.47833381 +0000 UTC m=+152.855601722" watchObservedRunningTime="2026-02-02 06:48:47.521802344 +0000 UTC m=+152.899070256" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.565563 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.565796 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.566026 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" 
event={"ID":"42ff05d2-dda3-411f-bcee-816f87ce21b8","Type":"ContainerStarted","Data":"6c3f6c2ad3db40219c62ae4bfa566bc8dd708b5bae52e7331ee9209d340c103f"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.577068 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" podStartSLOduration=131.57704609500001 podStartE2EDuration="2m11.577046095s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.57643354 +0000 UTC m=+152.953701462" watchObservedRunningTime="2026-02-02 06:48:47.577046095 +0000 UTC m=+152.954314007" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.586503 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.588073 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.088061722 +0000 UTC m=+153.465329634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.624441 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.630591 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" podStartSLOduration=131.630574714 podStartE2EDuration="2m11.630574714s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.6287733 +0000 UTC m=+153.006041212" watchObservedRunningTime="2026-02-02 06:48:47.630574714 +0000 UTC m=+153.007842616" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.671687 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" podStartSLOduration=131.671668861 podStartE2EDuration="2m11.671668861s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.670236816 +0000 UTC m=+153.047504728" watchObservedRunningTime="2026-02-02 06:48:47.671668861 +0000 UTC m=+153.048936773" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.687436 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.689124 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.189108674 +0000 UTC m=+153.566376586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.748539 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:47 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:47 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:47 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.749118 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.763117 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" podStartSLOduration=131.763087339 podStartE2EDuration="2m11.763087339s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.707710835 +0000 UTC m=+153.084978747" watchObservedRunningTime="2026-02-02 06:48:47.763087339 +0000 UTC m=+153.140355251" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.793698 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.794182 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.294162873 +0000 UTC m=+153.671430785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.808692 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" podStartSLOduration=131.808671835 podStartE2EDuration="2m11.808671835s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.804791061 +0000 UTC m=+153.182058993" watchObservedRunningTime="2026-02-02 06:48:47.808671835 +0000 UTC m=+153.185939747" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.884697 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kb6j9" podStartSLOduration=8.88467854 podStartE2EDuration="8.88467854s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.848641455 +0000 UTC m=+153.225909377" watchObservedRunningTime="2026-02-02 06:48:47.88467854 +0000 UTC m=+153.261946452" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.894660 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.895509 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.395488272 +0000 UTC m=+153.772756184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.987755 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" podStartSLOduration=131.98773747 podStartE2EDuration="2m11.98773747s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.949420371 +0000 UTC m=+153.326688283" watchObservedRunningTime="2026-02-02 06:48:47.98773747 +0000 UTC m=+153.365005382" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.006356 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.006961 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.506943607 +0000 UTC m=+153.884211519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.106976 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.107433 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.607398063 +0000 UTC m=+153.984665965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.107656 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.107942 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.607928156 +0000 UTC m=+153.985196058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.209876 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.210206 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.710170017 +0000 UTC m=+154.087437929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.292538 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" podStartSLOduration=132.292518485 podStartE2EDuration="2m12.292518485s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:48.003027942 +0000 UTC m=+153.380295854" watchObservedRunningTime="2026-02-02 06:48:48.292518485 +0000 UTC m=+153.669786397" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.293439 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.311807 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.312245 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.812229894 +0000 UTC m=+154.189497806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: W0202 06:48:48.364625 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69e94ec9_2a3b_4f85_a2b7_9e2f07359890.slice/crio-70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe WatchSource:0}: Error finding container 70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe: Status 404 returned error can't find the container with id 70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.418459 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.419602 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.449532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.450515 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.450833 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.950816596 +0000 UTC m=+154.328084508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.452001 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.463165 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551797 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551843 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551870 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 
02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.552206 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.052192306 +0000 UTC m=+154.429460228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.626729 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" event={"ID":"29629b99-9606-4830-9623-8c81cecbd0a9","Type":"ContainerStarted","Data":"945bf35ca92d015136320b8a3950b16173f51580a293bfcd2e5d9e048b095e63"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.636947 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653101 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653620 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653718 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.653860 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.153826113 +0000 UTC m=+154.531094025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653941 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.654072 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.654342 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.654813 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.655089 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.155081313 +0000 UTC m=+154.532349225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: W0202 06:48:48.670638 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1b2c621_4f86_4e6b_a1ec_02fc1c8113cb.slice/crio-5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2 WatchSource:0}: Error finding container 5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2: Status 404 returned error can't find the container with id 5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2 Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.672302 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kb6j9" event={"ID":"86a554b4-30b1-4521-8677-d1974308a379","Type":"ContainerStarted","Data":"fc41b765d65b37e76e44dc881241523daded8e9098ec7d7110ba797ed3104865"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.681949 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" event={"ID":"6d58ee7c-c176-4ddd-af48-d9406f4eac74","Type":"ContainerStarted","Data":"464808a952f31995536654acb3e40458f6bdf1141a3826c80ebed195946cb223"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.681992 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" event={"ID":"6d58ee7c-c176-4ddd-af48-d9406f4eac74","Type":"ContainerStarted","Data":"fc4d29a0747e30e6da42f8f4d68ab645a40f4871c274145d334bf8b67df10e17"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.717186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.736325 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" event={"ID":"bf3383aa-e821-4389-b2f0-cc697ad4cc7a","Type":"ContainerStarted","Data":"d2239d85906d15d95c2bb6ad0bac6d6a4fa5210561871e6415d870eb980801b3"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.742720 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:48 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:48 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:48 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.742792 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.751245 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" podStartSLOduration=132.751197615 podStartE2EDuration="2m12.751197615s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:48.748645773 +0000 UTC m=+154.125913685" watchObservedRunningTime="2026-02-02 06:48:48.751197615 +0000 UTC m=+154.128465527" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.757021 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.758108 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.258090793 +0000 UTC m=+154.635358705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.778627 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c64e0ba18824303759c485f51437b75ee8a74e6a8d4b944cc24f13e144ecbe12"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.804020 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.807308 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.808245 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.821080 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"abc6df72344326b897c79dddbc777f5f79006f3f6d9b1ffb0a343fb984c0a1d8"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.821111 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.834575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z2sjd" event={"ID":"3c976fbc-6a91-494d-8d9e-1abe8119acf9","Type":"ContainerStarted","Data":"b605dc68a1537d20b52455afc18f9c6874d5e4629bdf33a784ebdbad41479788"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.834634 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z2sjd" event={"ID":"3c976fbc-6a91-494d-8d9e-1abe8119acf9","Type":"ContainerStarted","Data":"16d191e4c3d20a62bbed26f5b987517c0d907aaca750f0de1b19d841076ab695"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.835287 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.835382 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.842928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" event={"ID":"27bce4a1-799c-4d40-900c-455eaba28398","Type":"ContainerStarted","Data":"23d9213aaa279c5ba91232afd0bcc353ea39ca32556a7b145af1723f6e7fdb89"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.847028 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.857846 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.859110 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.359093683 +0000 UTC m=+154.736361595 (durationBeforeRetry 500ms). 
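
Each of these failures also arms a retry gate: nestedpendingoperations records "No retries permitted until <t>", where <t> is the failure time plus durationBeforeRetry (500ms in every entry shown), and the m=+154.x suffix on each deadline is Go's monotonic-clock reading in seconds since the kubelet process started. A hypothetical reduction of that gating to a map of deadlines; backoffGate and tryRun are invented names, and kubelet's real bookkeeping is more involved:

    package main

    import (
        "fmt"
        "time"
    )

    // backoffGate freezes an operation key until now+delay after a failure;
    // any attempt inside that window is rejected without running.
    type backoffGate struct {
        delay    time.Duration // "durationBeforeRetry" (500ms in the entries here)
        notUntil map[string]time.Time
    }

    func (g *backoffGate) tryRun(key string, op func() error) error {
        if t, ok := g.notUntil[key]; ok && time.Now().Before(t) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                t.Format(time.RFC3339Nano), g.delay)
        }
        if err := op(); err != nil {
            g.notUntil[key] = time.Now().Add(g.delay) // arm the gate on failure
            return err
        }
        delete(g.notUntil, key) // success clears the backoff state
        return nil
    }

    func main() {
        g := &backoffGate{delay: 500 * time.Millisecond, notUntil: map[string]time.Time{}}
        key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"

        fail := func() error { return fmt.Errorf("driver not registered yet") }
        fmt.Println(g.tryRun(key, fail)) // fails and arms the 500ms gate
        fmt.Println(g.tryRun(key, fail)) // refused: inside the no-retry window

        time.Sleep(600 * time.Millisecond)
        fmt.Println(g.tryRun(key, func() error { return nil })) // window expired: runs, prints <nil>
    }
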
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.872750 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerStarted","Data":"c506de4db9b35d63a13c91e4a7d3e3341423ad6d40865559c6c4ab2ba9c302bd"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.890898 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"b1939976e4e45123ce137bcc7b004566d4ab88bd67c2b1f240a5eee27ca61a78"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.892766 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerStarted","Data":"e77b162572adbddd868d73ee2b2382cf4886626b5d00d4cbd3b5a5a655acde51"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.905791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerStarted","Data":"70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.930960 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.931193 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z2sjd" podStartSLOduration=9.931177272 podStartE2EDuration="9.931177272s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:48.929801699 +0000 UTC m=+154.307069611" watchObservedRunningTime="2026-02-02 06:48:48.931177272 +0000 UTC m=+154.308445184" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.944997 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" event={"ID":"a8cad1e4-b070-477e-a20a-5cf8cb397e85","Type":"ContainerStarted","Data":"f519d41e2af002013e9c8a9601773eaefbe413047fe143e197cb8bf8279ee889"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.945034 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" event={"ID":"a8cad1e4-b070-477e-a20a-5cf8cb397e85","Type":"ContainerStarted","Data":"c8be51be5b86931f9db2151645a1d2b84329a6fafaf11f6e805c13133c9a85f5"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.946716 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 
06:48:48.946748 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.946807 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bzsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.946820 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.953068 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.960741 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.961110 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.961226 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.961256 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.962173 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.462152654 +0000 UTC m=+154.839420566 (durationBeforeRetry 500ms). 
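
The pod_startup_latency_tracker entries can be checked arithmetically: for migrator-59844c95c7-kgv82 above, watchObservedRunningTime (06:48:48.751197615) minus podCreationTimestamp (06:46:36) is exactly the reported podStartSLOduration of 132.751197615s, and since firstStartedPulling/lastFinishedPulling are the zero time (no image pull observed), it equals podStartE2EDuration as well. A quick sketch of that subtraction; the interpretation is inferred from the printed fields, not taken from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the migrator-59844c95c7-kgv82 entry above.
        created, _ := time.Parse(time.RFC3339, "2026-02-02T06:46:36Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2026-02-02T06:48:48.751197615Z")

        d := observed.Sub(created)
        fmt.Println(d)           // 2m12.751197615s
        fmt.Println(d.Seconds()) // 132.751197615, the reported podStartSLOduration
    }
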
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.047971 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067281 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067448 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067827 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.068951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.069975 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.56996012 +0000 UTC m=+154.947228032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.077633 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.138573 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.159336 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.172923 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.173294 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.673264307 +0000 UTC m=+155.050532219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.285355 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.285817 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.785802388 +0000 UTC m=+155.163070300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.386262 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.386618 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.886584743 +0000 UTC m=+155.263852665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.488371 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.488821 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.988799964 +0000 UTC m=+155.366067876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.589720 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.589943 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.089903257 +0000 UTC m=+155.467171169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.589997 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.590403 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.090395889 +0000 UTC m=+155.467663791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.691350 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.691633 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.191588054 +0000 UTC m=+155.568855996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.737383 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:49 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:49 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:49 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.737478 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.793728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.794132 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.294114202 +0000 UTC m=+155.671382134 (durationBeforeRetry 500ms). 
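
The router startup-probe failures recurring here, and the connection-refused readiness failures above, follow the standard kubelet HTTP probe contract: a GET against the container's endpoint, success only for status codes in [200,400), with the start of the response body (the [-]backend-http / [-]has-synced / [+]process-running check list) kept for the log line. A self-contained sketch of that behavior; probe is a hypothetical helper, not kubelet's prober:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probe issues a GET and applies the contract visible in the log: status
    // codes in [200,400) succeed, everything else fails, and only the start
    // of the response body is kept for the failure message.
    func probe(url string) (string, error) {
        c := &http.Client{Timeout: time.Second}
        resp, err := c.Get(url)
        if err != nil {
            // Matches the readiness failures above, e.g. "connect: connection refused".
            return "", err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return string(body), fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
        }
        return string(body), nil
    }

    func main() {
        // Address taken from the download-server entries; only reachable on that node.
        startOfBody, err := probe("http://10.217.0.26:8080/")
        fmt.Printf("start-of-body=%q err=%v\n", startOfBody, err)
    }
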
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.895380 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.895682 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.395637836 +0000 UTC m=+155.772905778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.896115 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.896619 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.396604709 +0000 UTC m=+155.773872631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.949122 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" exitCode=0 Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.949173 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.949244 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerStarted","Data":"5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.950862 4842 generic.go:334] "Generic (PLEG): container finished" podID="671957e9-c40d-416d-8756-a4d7f0abc317" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99" exitCode=0 Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.950934 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.952389 4842 generic.go:334] "Generic (PLEG): container finished" podID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a" exitCode=0 Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.952465 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.955684 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerStarted","Data":"ad1fd21c691dc675b62fad95a6e7e8ad52ebcb62e20c4eefb0dc3125badfd973"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.957342 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bzsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.957379 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 
10.217.0.21:8080: connect: connection refused" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.998197 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.998372 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.498342638 +0000 UTC m=+155.875610560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.998822 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.999278 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.49926649 +0000 UTC m=+155.876534412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.100334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.100527 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.600499297 +0000 UTC m=+155.977767209 (durationBeforeRetry 500ms). 
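
The SyncLoop (PLEG) entries carry a compact event payload: the pod UID in ID, a lifecycle type such as ContainerStarted or ContainerDied in Type, and the container or sandbox ID in Data. A minimal decoding sketch using the ContainerDied payload from the community-operators-l9qkz entry above; plegEvent is a hypothetical type mirroring the printed shape:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // plegEvent mirrors the payload shape printed in the SyncLoop (PLEG)
    // entries: pod UID, lifecycle type, container/sandbox ID.
    type plegEvent struct {
        ID   string // pod UID
        Type string // "ContainerStarted", "ContainerDied", ...
        Data string // container (or sandbox) ID the event refers to
    }

    func main() {
        // Payload copied from the community-operators-l9qkz ContainerDied entry above.
        raw := `{"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5"}`

        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        // The matching "container finished ... exitCode=0" line above marks a
        // normal exit (consistent with a catalog pod's utility container
        // completing), not a crash.
        fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }
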
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.101497 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.101831 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.601822959 +0000 UTC m=+155.979090871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.202674 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.202968 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.702904682 +0000 UTC m=+156.080172644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.252924 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.267282 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.268625 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.272616 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.281234 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.282364 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.292564 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.297907 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.306270 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.306689 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.80667125 +0000 UTC m=+156.183939162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.407951 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.408138 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.908110451 +0000 UTC m=+156.285378363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410790 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410854 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410930 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.411037 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.411073 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.411091 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.411464 4842 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.911451442 +0000 UTC m=+156.288719344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.511758 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.511979 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512031 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512054 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512067 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512107 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512138 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512534 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.512601 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.012587326 +0000 UTC m=+156.389855228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.514027 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.514535 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.514572 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.547609 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.549708 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.560641 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.613261 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.614181 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.114163131 +0000 UTC m=+156.491431043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.702024 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.709893 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.714498 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.714864 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.214849864 +0000 UTC m=+156.592117776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.741059 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:50 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:50 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:50 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.741114 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.816342 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.816682 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.316666415 +0000 UTC m=+156.693934327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.918275 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.918799 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.418775583 +0000 UTC m=+156.796043485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.920237 4842 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.974540 4842 generic.go:334] "Generic (PLEG): container finished" podID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d" exitCode=0 Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.974609 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d"} Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.974633 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerStarted","Data":"d839d2fe1ddee6dc1ee5e5c2514aaebc941a9e75e08e10d40cd5d9caf2627fd2"} Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.980235 4842 generic.go:334] "Generic (PLEG): container finished" podID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" exitCode=0 Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.980282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0"} Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.983824 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"fe859926fa724edf66bc512aa764eb09f7f815b4dfffab337894c3858c798aba"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.007839 4842 generic.go:334] "Generic (PLEG): container finished" podID="de569fea-56ca-4762-9a22-a12561c296b6" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c" exitCode=0 Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.007917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.007937 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerStarted","Data":"281d01870ece6a3181561fda9dfe308cdde10657dccb47ecb2c8628297416b48"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.012922 4842 generic.go:334] "Generic (PLEG): container finished" podID="5b43b464-5623-46bb-8097-65b505d08960" 
containerID="ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e" exitCode=0 Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.013435 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerDied","Data":"ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.019462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.020902 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.520890621 +0000 UTC m=+156.898158533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.053694 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:48:51 crc kubenswrapper[4842]: W0202 06:48:51.117068 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be4c568_0aa4_4495_87b0_ec266872eb12.slice/crio-4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf WatchSource:0}: Error finding container 4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf: Status 404 returned error can't find the container with id 4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.121363 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.135953 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.635932912 +0000 UTC m=+157.013200824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.137409 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:48:51 crc kubenswrapper[4842]: W0202 06:48:51.164004 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99088cf9_5dcc_4837_943b_4deca45c1401.slice/crio-535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9 WatchSource:0}: Error finding container 535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9: Status 404 returned error can't find the container with id 535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9 Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.237055 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.237519 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.737504057 +0000 UTC m=+157.114771969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.338673 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.338860 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.838829376 +0000 UTC m=+157.216097288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.339170 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.339506 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.839494552 +0000 UTC m=+157.216762464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.440181 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.440323 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.940301118 +0000 UTC m=+157.317569030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.440453 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.440773 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.940766759 +0000 UTC m=+157.318034671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.541399 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.541577 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.041550645 +0000 UTC m=+157.418818557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.541791 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.542106 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.042098788 +0000 UTC m=+157.419366700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.643073 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.643273 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.143248481 +0000 UTC m=+157.520516393 (durationBeforeRetry 500ms). 
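The same pair of errors repeats on every reconciler pass, roughly a dozen occurrences in under two seconds of this excerpt, until the driver registers. When triaging a saved journal, a throwaway counter like the following sizes the retry window; the file name is an assumption, and the helper is not part of any shipped tooling:

```go
// count-csi-retries.go — throwaway analysis sketch: count how many times the
// unregistered-driver error repeats in a saved journal excerpt.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("kubelet.log") // path is illustrative
	if err != nil {
		panic(err)
	}
	defer f.Close()

	count := 0
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if strings.Contains(sc.Text(), "not found in the list of registered CSI drivers") {
			count++
		}
	}
	fmt.Println("unregistered-driver errors:", count)
}
```
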
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.643398 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.643902 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.143892357 +0000 UTC m=+157.521160269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.656766 4842 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T06:48:50.920248608Z","Handler":null,"Name":""} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.676289 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.678905 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.681492 4842 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.681551 4842 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.683544 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.737872 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:51 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:51 crc kubenswrapper[4842]: [+]process-running ok Feb 02 
06:48:51 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.737943 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.744736 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.749766 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.808737 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.808784 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.816865 4842 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5dc9g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]log ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]etcd ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/max-in-flight-filter ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 06:48:51 crc kubenswrapper[4842]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-startinformers ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 06:48:51 crc kubenswrapper[4842]: livez check failed Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.816911 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" podUID="d8b4ca95-d26b-4f03-b095-b5096b6c3fbe" containerName="openshift-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.851132 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.892278 4842 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.892323 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.933289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.989364 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.989996 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.991675 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.991955 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.995442 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.025164 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.042440 4842 generic.go:334] "Generic (PLEG): container finished" podID="99088cf9-5dcc-4837-943b-4deca45c1401" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164" exitCode=0 Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.042497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164"} Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.042520 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerStarted","Data":"535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9"} Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.045182 4842 generic.go:334] "Generic (PLEG): container finished" podID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f" exitCode=0 Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.045428 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f"} Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.045578 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerStarted","Data":"4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf"} Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.049543 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"f15d9506e5b40443687fbab2e9220f3d1c689180ce0206bdcef9286524b11e16"} Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.049587 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"5f3fba8d88c022599a1cddd263d757e5bf2ae550c1bea15862cacb8fa3958b74"} Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.054805 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.054935 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.057093 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 
02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.132801 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" podStartSLOduration=13.13278362 podStartE2EDuration="13.13278362s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:52.13032206 +0000 UTC m=+157.507589972" watchObservedRunningTime="2026-02-02 06:48:52.13278362 +0000 UTC m=+157.510051532" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.158116 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.158167 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.158238 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.181839 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.325738 4842 util.go:30] "No sandbox for pod can be found. 
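The pod_startup_latency_tracker record above is simple arithmetic: podStartSLOduration is the watch-observed running time minus the pod creation timestamp, with image-pull time subtracted when the pull timestamps are real; here firstStartedPulling and lastFinishedPulling are both the zero time ("0001-01-01 ..."), so nothing is excluded and 06:48:52.13278362 − 06:48:39 = 13.13278362s. A sketch of that computation under the same assumption:

```go
// sloduration.go — the arithmetic behind podStartSLOduration=13.13278362s in
// the record above (pull window excluded because both pull timestamps are the
// zero time).
package main

import (
	"fmt"
	"time"
)

func mustParse(layout, v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created := mustParse(layout, "2026-02-02 06:48:39 +0000 UTC")
	// Go's parser accepts fractional seconds in the input even though the
	// layout above does not spell them out.
	running := mustParse(layout, "2026-02-02 06:48:52.13278362 +0000 UTC")
	fmt.Println(running.Sub(created).Seconds()) // 13.13278362, matching the log
}
```
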
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419405 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419770 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419470 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419422 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.421267 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.421485 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.424598 4842 patch_prober.go:28] interesting pod/console-f9d7485db-kmw8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.424642 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kmw8f" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.481469 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.491908 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:52 crc kubenswrapper[4842]: W0202 06:48:52.505796 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb76f3bc4_4824_422b_a14a_e7cd193ed30d.slice/crio-abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652 WatchSource:0}: Error finding container abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652: Status 404 returned error can't find the container with id abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652 Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.564813 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"5b43b464-5623-46bb-8097-65b505d08960\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.564890 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"5b43b464-5623-46bb-8097-65b505d08960\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.564994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod \"5b43b464-5623-46bb-8097-65b505d08960\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.566787 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b43b464-5623-46bb-8097-65b505d08960" (UID: "5b43b464-5623-46bb-8097-65b505d08960"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.574428 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b43b464-5623-46bb-8097-65b505d08960" (UID: "5b43b464-5623-46bb-8097-65b505d08960"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.574629 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8" (OuterVolumeSpecName: "kube-api-access-p8ll8") pod "5b43b464-5623-46bb-8097-65b505d08960" (UID: "5b43b464-5623-46bb-8097-65b505d08960"). InnerVolumeSpecName "kube-api-access-p8ll8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.665976 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.666005 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.666016 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.732917 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.740901 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:52 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:52 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:52 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.740938 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.836457 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.043864 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.069340 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.113598 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.113731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerDied","Data":"5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34"} Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.113786 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.129491 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerStarted","Data":"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4"} Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.129557 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerStarted","Data":"abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652"} Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.131233 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.174997 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" podStartSLOduration=137.17498088 podStartE2EDuration="2m17.17498088s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:53.17252038 +0000 UTC m=+158.549788302" watchObservedRunningTime="2026-02-02 06:48:53.17498088 +0000 UTC m=+158.552248782" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.481083 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.735875 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:53 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:53 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:53 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.735927 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.147789 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2298664c-b466-4829-bccf-8f5a49efafdb","Type":"ContainerStarted","Data":"9672a6ddab80bc300da97b79bd14e40058a02f19d3a230db5eabe623ded153a0"} Feb 02 06:48:54 crc 
kubenswrapper[4842]: I0202 06:48:54.231105 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 06:48:54 crc kubenswrapper[4842]: E0202 06:48:54.231368 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b43b464-5623-46bb-8097-65b505d08960" containerName="collect-profiles"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.231399 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b43b464-5623-46bb-8097-65b505d08960" containerName="collect-profiles"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.231556 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b43b464-5623-46bb-8097-65b505d08960" containerName="collect-profiles"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.231952 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.234027 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.234442 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.239012 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.315476 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.315526 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.417292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.417344 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.417441 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.466811 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.562052 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.736553 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:54 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:54 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:54 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.736619 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.044294 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.157570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerStarted","Data":"63df2dbe83d771de3ee2390f597aa7eb8663570b98da094b957d600da86a730a"}
Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.163592 4842 generic.go:334] "Generic (PLEG): container finished" podID="2298664c-b466-4829-bccf-8f5a49efafdb" containerID="7a12e90bf3e43c95b6e601257b8c111b1524ce5b9f1e59ad387715a73494345a" exitCode=0
Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.163633 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2298664c-b466-4829-bccf-8f5a49efafdb","Type":"ContainerDied","Data":"7a12e90bf3e43c95b6e601257b8c111b1524ce5b9f1e59ad387715a73494345a"}
Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.736514 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:55 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:55 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:55 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.736594 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.194307 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerStarted","Data":"57ac07575bb5778011d98303226e4e4e9a167afdaea5a5d819196b7d3fdab21c"}
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.211541 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.211482531 podStartE2EDuration="2.211482531s" podCreationTimestamp="2026-02-02 06:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:56.2093807 +0000 UTC m=+161.586648602" watchObservedRunningTime="2026-02-02 06:48:56.211482531 +0000 UTC m=+161.588750443"
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.468405 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554396 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"2298664c-b466-4829-bccf-8f5a49efafdb\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") "
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554492 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2298664c-b466-4829-bccf-8f5a49efafdb" (UID: "2298664c-b466-4829-bccf-8f5a49efafdb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554588 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"2298664c-b466-4829-bccf-8f5a49efafdb\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") "
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554907 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.562579 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2298664c-b466-4829-bccf-8f5a49efafdb" (UID: "2298664c-b466-4829-bccf-8f5a49efafdb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
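[Annotation] The pod_startup_latency_tracker record above is internally consistent: podStartSLOduration (2.211482531s) is simply observedRunningTime (06:48:56.211...) minus podCreationTimestamp (06:48:54), and the zero-valued firstStartedPulling/lastFinishedPulling ("0001-01-01 00:00:00 +0000 UTC") means no image pull was counted, i.e. the image was already on the node. A minimal sketch of that arithmetic, with the timestamps copied from the record (Go used purely for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the "Observed pod startup duration" record.
        created, _ := time.Parse(time.RFC3339, "2026-02-02T06:48:54Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2026-02-02T06:48:56.211482531Z")

        // SLO duration = observed running time - pod creation time.
        fmt.Println(observed.Sub(created)) // 2.211482531s, matching podStartSLOduration

        // A zero time.Time renders as "0001-01-01 00:00:00 +0000 UTC"; that is how
        // the tracker reports "no pull happened" for an already-cached image.
        var never time.Time
        fmt.Println(never.IsZero()) // true
    }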
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.659893 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.735738 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:56 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:56 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:56 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.735789 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.815152 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.823473 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.215542 4842 generic.go:334] "Generic (PLEG): container finished" podID="012f550e-3c84-45fc-8d26-c49c763e808f" containerID="57ac07575bb5778011d98303226e4e4e9a167afdaea5a5d819196b7d3fdab21c" exitCode=0
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.215638 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerDied","Data":"57ac07575bb5778011d98303226e4e4e9a167afdaea5a5d819196b7d3fdab21c"}
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.230247 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.238520 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2298664c-b466-4829-bccf-8f5a49efafdb","Type":"ContainerDied","Data":"9672a6ddab80bc300da97b79bd14e40058a02f19d3a230db5eabe623ded153a0"}
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.238594 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9672a6ddab80bc300da97b79bd14e40058a02f19d3a230db5eabe623ded153a0"
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.737038 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:57 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:57 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:57 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.737104 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.952905 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z2sjd"
Feb 02 06:48:58 crc kubenswrapper[4842]: I0202 06:48:58.735700 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:58 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:58 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:58 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:58 crc kubenswrapper[4842]: I0202 06:48:58.736015 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.305292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.311065 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.372762 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.735426 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:59 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:59 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:59 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.735488 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:49:00 crc kubenswrapper[4842]: I0202 06:49:00.736455 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:49:00 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:49:00 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:49:00 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:49:00 crc kubenswrapper[4842]: I0202 06:49:00.736983 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:49:01 crc kubenswrapper[4842]: I0202 06:49:01.736262 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-j7bfz"
Feb 02 06:49:01 crc kubenswrapper[4842]: I0202 06:49:01.739307 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-j7bfz"
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.316433 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.361203 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"012f550e-3c84-45fc-8d26-c49c763e808f\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") "
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.361259 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"012f550e-3c84-45fc-8d26-c49c763e808f\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") "
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.361406 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "012f550e-3c84-45fc-8d26-c49c763e808f" (UID: "012f550e-3c84-45fc-8d26-c49c763e808f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
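[Annotation] The recurring router startup-probe failures above use the standard Kubernetes healthz rendering: one "[+]name ok" or "[-]name failed: reason withheld" line per sub-check, a trailing "healthz check failed", and HTTP 500 when any check fails; here backend-http and has-synced fail while process-running passes, until the 06:49:01 records show the probe finally succeeding. A minimal sketch of an endpoint that renders in this format (illustrative only, not the OpenShift router's actual code; the port is an arbitrary example):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // check is one named sub-check of an aggregated healthz endpoint.
    type check struct {
        name string
        ok   bool
    }

    func healthz(w http.ResponseWriter, _ *http.Request) {
        // Example states mirroring the log: process up, routes not yet synced.
        checks := []check{
            {"backend-http", false},
            {"has-synced", false},
            {"process-running", true},
        }
        body, failed := "", false
        for _, c := range checks {
            if c.ok {
                body += fmt.Sprintf("[+]%s ok\n", c.name)
            } else {
                body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                failed = true
            }
        }
        if failed {
            w.WriteHeader(http.StatusInternalServerError) // kubelet logs "statuscode: 500"
            body += "healthz check failed\n"
        }
        fmt.Fprint(w, body)
    }

    func main() {
        http.HandleFunc("/healthz", healthz)
        log.Fatal(http.ListenAndServe(":1936", nil))
    }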
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.362417 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.368435 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "012f550e-3c84-45fc-8d26-c49c763e808f" (UID: "012f550e-3c84-45fc-8d26-c49c763e808f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416023 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416051 4842 patch_prober.go:28] interesting pod/console-f9d7485db-kmw8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416086 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416082 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416099 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kmw8f" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416138 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.463999 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:03 crc kubenswrapper[4842]: I0202 06:49:03.270784 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerDied","Data":"63df2dbe83d771de3ee2390f597aa7eb8663570b98da094b957d600da86a730a"}
Feb 02 06:49:03 crc kubenswrapper[4842]: I0202 06:49:03.270812 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 06:49:03 crc kubenswrapper[4842]: I0202 06:49:03.270817 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63df2dbe83d771de3ee2390f597aa7eb8663570b98da094b957d600da86a730a"
Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.226557 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"]
Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.226745 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" containerID="cri-o://64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132" gracePeriod=30
Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.250353 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"]
Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.250749 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" containerID="cri-o://ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e" gracePeriod=30
Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.294483 4842 generic.go:334] "Generic (PLEG): container finished" podID="c7352a46-964e-478a-a141-7b1f3d529b85" containerID="ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e" exitCode=0
Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.294601 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerDied","Data":"ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e"}
Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.297343 4842 generic.go:334] "Generic (PLEG): container finished" podID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerID="64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132" exitCode=0
Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.297379 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerDied","Data":"64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132"}
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.036272 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.147109 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.148982 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.424508 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-pbtq6"
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.452756 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kmw8f"
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.470751 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kmw8f"
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.883014 4842 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rssw5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.883079 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.972496 4842 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-brh4m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.972565 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 06:49:15 crc kubenswrapper[4842]: E0202 06:49:15.314501 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 02 06:49:15 crc kubenswrapper[4842]: E0202 06:49:15.315165 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gfrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5l5m7_openshift-marketplace(99088cf9-5dcc-4837-943b-4deca45c1401): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 06:49:15 crc kubenswrapper[4842]: E0202 06:49:15.316571 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5l5m7" podUID="99088cf9-5dcc-4837-943b-4deca45c1401"
Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.606130 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5l5m7" podUID="99088cf9-5dcc-4837-943b-4deca45c1401"
Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.681798 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.682043 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrqbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l9qkz_openshift-marketplace(c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.683363 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l9qkz" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"
Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.690294 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.690445 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwfcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m6ms7_openshift-marketplace(eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.691724 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m6ms7" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.043772 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m6ms7" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.043780 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l9qkz" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.122651 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.122974 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5"
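[Annotation] The marketplace catalog pods above cycle between ErrImagePull (the CRI pull RPC was canceled mid-copy: "context canceled") and ImagePullBackOff (the kubelet refusing to retry immediately). Between attempts the kubelet applies an exponential back-off; the sketch below shows only the generic doubling-with-cap pattern behind the "Back-off pulling image" messages. The 10s initial / 5m cap values are assumptions borrowed from the kubelet's container-restart defaults, not values read from this log:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the retry delay up to a cap: the general shape of
    // the back-off behind kubelet's "Back-off pulling image ..." messages.
    func nextBackoff(cur, initial, cap time.Duration) time.Duration {
        if cur == 0 {
            return initial
        }
        if cur*2 > cap {
            return cap
        }
        return cur * 2
    }

    func main() {
        var d time.Duration
        for i := 0; i < 7; i++ {
            d = nextBackoff(d, 10*time.Second, 5*time.Minute)
            fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m 5m
        }
    }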
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150154 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"]
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150363 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150374 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150387 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150405 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150412 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012f550e-3c84-45fc-8d26-c49c763e808f" containerName="pruner"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150417 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="012f550e-3c84-45fc-8d26-c49c763e808f" containerName="pruner"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150424 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2298664c-b466-4829-bccf-8f5a49efafdb" containerName="pruner"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150430 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2298664c-b466-4829-bccf-8f5a49efafdb" containerName="pruner"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150521 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150532 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2298664c-b466-4829-bccf-8f5a49efafdb" containerName="pruner"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150538 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150545 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="012f550e-3c84-45fc-8d26-c49c763e808f" containerName="pruner"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150873 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.174808 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"]
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177044 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177046 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177153 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q662f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-z5jt7_openshift-marketplace(69e94ec9-2a3b-4f85-a2b7-9e2f07359890): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177209 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtcmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9mdpt_openshift-marketplace(0401543d-1af2-45fd-a8e1-05cec083bdd7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.178500 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9mdpt" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.178516 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-z5jt7" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.195210 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.195360 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8v2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-74vp9_openshift-marketplace(671957e9-c40d-416d-8756-a4d7f0abc317): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.196581 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-74vp9" podUID="671957e9-c40d-416d-8756-a4d7f0abc317"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204390 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204426 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204461 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204482 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204522 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204618 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204667 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204691 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") "
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204920 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204964 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204997 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205020 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205143 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config" (OuterVolumeSpecName: "config") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205485 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205557 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.206522 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config" (OuterVolumeSpecName: "config") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.207074 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.212495 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.213947 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.214694 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28" (OuterVolumeSpecName: "kube-api-access-wpp28") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "kube-api-access-wpp28". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.224703 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb" (OuterVolumeSpecName: "kube-api-access-8j2bb") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "kube-api-access-8j2bb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.305974 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306080 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306137 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306176 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306188 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306223 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306232 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306243 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306252 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306260 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306268 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306292 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.307157 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.307265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.309883 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.323427 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.388819 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerStarted","Data":"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"}
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.391764 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5"
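[Annotation] The reconciler_common/operation_generator burst above is the kubelet's volume manager swapping the deleted controller-manager pods for their replacements: volumes for the new pod (UID b8224d52-...) walk VerifyControllerAttachedVolume then MountVolume.SetUp, while the old pods' volumes (UIDs c7352a46-... and 3a1b2909-...) walk UnmountVolume.TearDown and end at "Volume detached". A compressed sketch of that desired-state-vs-actual-state loop, assuming simple string sets rather than kubelet's real data structures (illustrative only, not kubelet source):

    package main

    import "fmt"

    // reconcile compares the desired world (volumes pods want mounted) with
    // the actual world (volumes currently mounted), unmounting strays and
    // mounting what is missing, in the order the log records show.
    func reconcile(desired, actual map[string]bool) {
        for vol := range actual {
            if !desired[vol] {
                fmt.Println("UnmountVolume.TearDown ->", vol) // then "Volume detached"
                delete(actual, vol)
            }
        }
        for vol := range desired {
            if !actual[vol] {
                fmt.Println("MountVolume.SetUp ->", vol) // after VerifyControllerAttachedVolume
                actual[vol] = true
            }
        }
    }

    func main() {
        // Hypothetical names echoing the c7352a46.../b8224d52... records above.
        actual := map[string]bool{"c7352a46-serving-cert": true, "c7352a46-config": true}
        desired := map[string]bool{"b8224d52-serving-cert": true, "b8224d52-config": true}
        reconcile(desired, actual)
    }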
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.391849 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerDied","Data":"44ebd0c802db6062893241169e4706979097a692764a061e2fde6a02c71197ca"}
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.391906 4842 scope.go:117] "RemoveContainer" containerID="ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.393377 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerDied","Data":"643cd1b7543d0a40a6f2280aca5f3b03741bd2063f49a6310b7a1671fc67d3cc"}
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.393392 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.399787 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerStarted","Data":"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"}
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.402704 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-74vp9" podUID="671957e9-c40d-416d-8756-a4d7f0abc317"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.402905 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-z5jt7" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890"
Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.404714 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9mdpt" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.421008 4842 scope.go:117] "RemoveContainer" containerID="64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.475530 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.499204 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9chjr"]
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.513938 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"]
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.515999 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"]
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.521371 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"]
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.525239 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"]
Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.890519 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"]
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.414498 4842 generic.go:334] "Generic (PLEG): container finished" podID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6" exitCode=0
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.414570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"}
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.420950 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9chjr" event={"ID":"4f6c3b51-669c-4c7b-a23a-ed68d139849e","Type":"ContainerStarted","Data":"b486737ddedac7129b1733a35834494a81d73278298468bd753a6886d46b395d"}
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.421016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9chjr" event={"ID":"4f6c3b51-669c-4c7b-a23a-ed68d139849e","Type":"ContainerStarted","Data":"f49188ca76e1ac3c0015ec96901f860985577da243e613ed7fc520adbafd049c"}
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.421038 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9chjr" event={"ID":"4f6c3b51-669c-4c7b-a23a-ed68d139849e","Type":"ContainerStarted","Data":"d50abf0ae8daa7ec43e532feea59b20a173ab6c4ee290954300cc157f434f3d3"}
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.424487 4842 generic.go:334] "Generic (PLEG): container finished" podID="de569fea-56ca-4762-9a22-a12561c296b6" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae" exitCode=0
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.424549 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"}
Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.468052 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d"
path="/var/lib/kubelet/pods/3a1b2909-d542-48b0-8729-294f7950ab2d/volumes" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.473452 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" path="/var/lib/kubelet/pods/c7352a46-964e-478a-a141-7b1f3d529b85/volumes" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.479071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerStarted","Data":"bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.479119 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.479137 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerStarted","Data":"7354bc8151db2d16116ce4466471dd76aa94cf92497c574dcb299c2e66d9e17c"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.490894 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9chjr" podStartSLOduration=164.49085246 podStartE2EDuration="2m44.49085246s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:19.472087875 +0000 UTC m=+184.849355847" watchObservedRunningTime="2026-02-02 06:49:19.49085246 +0000 UTC m=+184.868120432" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.532616 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" podStartSLOduration=15.532587913 podStartE2EDuration="15.532587913s" podCreationTimestamp="2026-02-02 06:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:19.528801401 +0000 UTC m=+184.906069373" watchObservedRunningTime="2026-02-02 06:49:19.532587913 +0000 UTC m=+184.909855825" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.571343 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.467640 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerStarted","Data":"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"} Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.471928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerStarted","Data":"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"} Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.498704 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m2j5m" podStartSLOduration=3.676823826 podStartE2EDuration="32.498688665s" 
podCreationTimestamp="2026-02-02 06:48:48 +0000 UTC" firstStartedPulling="2026-02-02 06:48:51.009906384 +0000 UTC m=+156.387174296" lastFinishedPulling="2026-02-02 06:49:19.831771223 +0000 UTC m=+185.209039135" observedRunningTime="2026-02-02 06:49:20.495056117 +0000 UTC m=+185.872324049" watchObservedRunningTime="2026-02-02 06:49:20.498688665 +0000 UTC m=+185.875956577" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.518985 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wjfbs" podStartSLOduration=2.657177434 podStartE2EDuration="30.518971077s" podCreationTimestamp="2026-02-02 06:48:50 +0000 UTC" firstStartedPulling="2026-02-02 06:48:52.051773105 +0000 UTC m=+157.429041017" lastFinishedPulling="2026-02-02 06:49:19.913566748 +0000 UTC m=+185.290834660" observedRunningTime="2026-02-02 06:49:20.515682008 +0000 UTC m=+185.892949930" watchObservedRunningTime="2026-02-02 06:49:20.518971077 +0000 UTC m=+185.896238989" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.672863 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"] Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.673632 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.678397 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.678574 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.679044 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.679059 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.679358 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.680765 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.688242 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.690564 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"] Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.739050 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.739247 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740346 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod 
\"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740436 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740504 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740552 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841522 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841578 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841650 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841695 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " 
pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841720 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.843562 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.843612 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.843614 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.854943 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.869511 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:21 crc kubenswrapper[4842]: I0202 06:49:21.044599 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:21 crc kubenswrapper[4842]: I0202 06:49:21.480975 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"] Feb 02 06:49:21 crc kubenswrapper[4842]: W0202 06:49:21.490160 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b2bbfb_c88f_4cbc_b071_c6275ae02a03.slice/crio-ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1 WatchSource:0}: Error finding container ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1: Status 404 returned error can't find the container with id ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1 Feb 02 06:49:21 crc kubenswrapper[4842]: I0202 06:49:21.995431 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wjfbs" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" probeResult="failure" output=< Feb 02 06:49:21 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 06:49:21 crc kubenswrapper[4842]: > Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.485108 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerStarted","Data":"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"} Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.485158 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerStarted","Data":"ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1"} Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.504719 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" podStartSLOduration=18.504702032 podStartE2EDuration="18.504702032s" podCreationTimestamp="2026-02-02 06:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:22.502894778 +0000 UTC m=+187.880162700" watchObservedRunningTime="2026-02-02 06:49:22.504702032 +0000 UTC m=+187.881969944" Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.824964 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:49:23 crc kubenswrapper[4842]: I0202 06:49:23.490570 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:23 crc kubenswrapper[4842]: I0202 06:49:23.496505 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:23 crc kubenswrapper[4842]: I0202 06:49:23.768906 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.168393 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"] Feb 02 
06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.259360 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"] Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.259666 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager" containerID="cri-o://bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e" gracePeriod=30 Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.501653 4842 generic.go:334] "Generic (PLEG): container finished" podID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerID="bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e" exitCode=0 Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.501739 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerDied","Data":"bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e"} Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.658660 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.792929 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793142 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793180 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793851 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.794027 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config" (OuterVolumeSpecName: "config") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.801026 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.805707 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq" (OuterVolumeSpecName: "kube-api-access-l99hq") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "kube-api-access-l99hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895056 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895108 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895124 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895170 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520368 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerDied","Data":"7354bc8151db2d16116ce4466471dd76aa94cf92497c574dcb299c2e66d9e17c"} Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520426 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520453 4842 scope.go:117] "RemoveContainer" containerID="bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520587 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager" containerID="cri-o://4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e" gracePeriod=30 Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.558483 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"] Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.563625 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"] Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.673752 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"] Feb 02 06:49:25 crc kubenswrapper[4842]: E0202 06:49:25.674464 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.674544 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.674706 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.675154 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.679403 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.679428 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.681920 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.681940 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.682105 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.682183 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.686493 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"] Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830304 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830361 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830407 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830454 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.931318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: 
\"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.932725 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.931371 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.932817 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.932861 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.935285 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.944838 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.947835 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.001519 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033123 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033179 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033276 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033298 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033338 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.034013 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca" (OuterVolumeSpecName: "client-ca") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.034044 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.034497 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config" (OuterVolumeSpecName: "config") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.036800 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf" (OuterVolumeSpecName: "kube-api-access-grxxf") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "kube-api-access-grxxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.036981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.040365 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134359 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134384 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134393 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134405 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134414 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.223484 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"] Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.528499 4842 generic.go:334] "Generic (PLEG): container finished" podID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e" exitCode=0 Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.528563 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.528579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerDied","Data":"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"} Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.529120 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerDied","Data":"ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1"} Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.529159 4842 scope.go:117] "RemoveContainer" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.532946 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerStarted","Data":"55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7"} Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.532996 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerStarted","Data":"0429779ecc8d7f354927858d9f829de9c008478a695454154ec2b53a1da0abb2"} Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.533139 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.548380 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" podStartSLOduration=2.5483590830000002 podStartE2EDuration="2.548359083s" podCreationTimestamp="2026-02-02 06:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:26.54781589 +0000 UTC m=+191.925083802" watchObservedRunningTime="2026-02-02 06:49:26.548359083 +0000 UTC m=+191.925626995" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.558402 4842 scope.go:117] "RemoveContainer" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e" Feb 02 06:49:26 crc kubenswrapper[4842]: E0202 06:49:26.558978 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e\": container with ID starting with 4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e not found: ID does not exist" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.559020 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"} err="failed to get container status \"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e\": rpc error: code = NotFound desc = could not find container 
\"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e\": container with ID starting with 4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e not found: ID does not exist" Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.575960 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"] Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.581344 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"] Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.910951 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.444998 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" path="/var/lib/kubelet/pods/03b2bbfb-c88f-4cbc-b071-c6275ae02a03/volumes" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.446756 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" path="/var/lib/kubelet/pods/b8224d52-6c96-4873-a87c-1f9c6ad87bd3/volumes" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.687396 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"] Feb 02 06:49:27 crc kubenswrapper[4842]: E0202 06:49:27.687638 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.687652 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.687781 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.688182 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.692434 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.693025 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.693758 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.694203 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.694365 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.696404 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"] Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.700540 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.707064 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861438 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861491 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861524 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861732 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962642 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962725 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962758 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962786 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962811 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.963943 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.964600 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.965368 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc 
kubenswrapper[4842]: I0202 06:49:27.971367 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.978753 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.013775 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.244504 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"] Feb 02 06:49:28 crc kubenswrapper[4842]: W0202 06:49:28.253735 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b226528_cbee_4e1b_a63a_2e9cb152a9a5.slice/crio-ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f WatchSource:0}: Error finding container ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f: Status 404 returned error can't find the container with id ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.546917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerStarted","Data":"460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6"} Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.547284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerStarted","Data":"ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f"} Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.547306 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.556066 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.570255 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" podStartSLOduration=4.570233764 podStartE2EDuration="4.570233764s" podCreationTimestamp="2026-02-02 06:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:28.565649132 +0000 UTC m=+193.942917054" watchObservedRunningTime="2026-02-02 06:49:28.570233764 +0000 UTC m=+193.947501676" Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.805104 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.805260 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.854257 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:49:29 crc kubenswrapper[4842]: I0202 06:49:29.613269 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.434748 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.435609 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.438563 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.483241 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.483300 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.608898 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.609027 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.691417 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"] Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.709889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.709962 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.710040 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod 
\"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.751685 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.777666 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.791179 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.835522 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.336249 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 06:49:31 crc kubenswrapper[4842]: W0202 06:49:31.340206 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcedde76f_459c_4b6b_8535_407c5e392ae7.slice/crio-a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4 WatchSource:0}: Error finding container a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4: Status 404 returned error can't find the container with id a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4 Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.571808 4842 generic.go:334] "Generic (PLEG): container finished" podID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca" exitCode=0 Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.571877 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca"} Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.575807 4842 generic.go:334] "Generic (PLEG): container finished" podID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" exitCode=0 Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.575865 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd"} Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.578521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cedde76f-459c-4b6b-8535-407c5e392ae7","Type":"ContainerStarted","Data":"a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4"} Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.585649 4842 generic.go:334] "Generic (PLEG): container finished" podID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerID="7d3b218c1e52bef522f13c85d510d4be2ae307bc8a91ffd26af387612387100e" exitCode=0 Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.585713 
4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cedde76f-459c-4b6b-8535-407c5e392ae7","Type":"ContainerDied","Data":"7d3b218c1e52bef522f13c85d510d4be2ae307bc8a91ffd26af387612387100e"} Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.588984 4842 generic.go:334] "Generic (PLEG): container finished" podID="671957e9-c40d-416d-8756-a4d7f0abc317" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551" exitCode=0 Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.589062 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551"} Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.590803 4842 generic.go:334] "Generic (PLEG): container finished" podID="99088cf9-5dcc-4837-943b-4deca45c1401" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41" exitCode=0 Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.590864 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41"} Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.597783 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerStarted","Data":"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"} Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.600441 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerStarted","Data":"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6"} Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.652902 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m6ms7" podStartSLOduration=3.5974510950000003 podStartE2EDuration="44.652883216s" podCreationTimestamp="2026-02-02 06:48:48 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.977085078 +0000 UTC m=+156.354352990" lastFinishedPulling="2026-02-02 06:49:32.032517199 +0000 UTC m=+197.409785111" observedRunningTime="2026-02-02 06:49:32.652082556 +0000 UTC m=+198.029350468" watchObservedRunningTime="2026-02-02 06:49:32.652883216 +0000 UTC m=+198.030151128" Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.669510 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9mdpt" podStartSLOduration=5.581205089 podStartE2EDuration="46.66949272s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.982042998 +0000 UTC m=+156.359310910" lastFinishedPulling="2026-02-02 06:49:32.070330619 +0000 UTC m=+197.447598541" observedRunningTime="2026-02-02 06:49:32.665979284 +0000 UTC m=+198.043247206" watchObservedRunningTime="2026-02-02 06:49:32.66949272 +0000 UTC m=+198.046760632" Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.681983 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 
06:49:32.682311 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wjfbs" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" containerID="cri-o://e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b" gracePeriod=2 Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.162695 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.347969 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"7be4c568-0aa4-4495-87b0-ec266872eb12\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.348105 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"7be4c568-0aa4-4495-87b0-ec266872eb12\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.348157 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"7be4c568-0aa4-4495-87b0-ec266872eb12\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.350094 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities" (OuterVolumeSpecName: "utilities") pod "7be4c568-0aa4-4495-87b0-ec266872eb12" (UID: "7be4c568-0aa4-4495-87b0-ec266872eb12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.353379 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2" (OuterVolumeSpecName: "kube-api-access-8zgw2") pod "7be4c568-0aa4-4495-87b0-ec266872eb12" (UID: "7be4c568-0aa4-4495-87b0-ec266872eb12"). InnerVolumeSpecName "kube-api-access-8zgw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.452274 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.452312 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.482428 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7be4c568-0aa4-4495-87b0-ec266872eb12" (UID: "7be4c568-0aa4-4495-87b0-ec266872eb12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.553257 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.607312 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerStarted","Data":"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"} Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.609302 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerStarted","Data":"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"} Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.610994 4842 generic.go:334] "Generic (PLEG): container finished" podID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b" exitCode=0 Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611142 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611149 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"} Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611375 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf"} Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611399 4842 scope.go:117] "RemoveContainer" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.626920 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-74vp9" podStartSLOduration=4.756231769 podStartE2EDuration="47.626905899s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.17562678 +0000 UTC m=+155.552894722" lastFinishedPulling="2026-02-02 06:49:33.04630094 +0000 UTC m=+198.423568852" observedRunningTime="2026-02-02 06:49:33.621914777 +0000 UTC m=+198.999182689" watchObservedRunningTime="2026-02-02 06:49:33.626905899 +0000 UTC m=+199.004173801" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.633576 4842 scope.go:117] "RemoveContainer" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.643707 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5l5m7" podStartSLOduration=2.690838227 podStartE2EDuration="43.643677107s" podCreationTimestamp="2026-02-02 06:48:50 +0000 UTC" firstStartedPulling="2026-02-02 06:48:52.044198951 +0000 UTC m=+157.421466863" lastFinishedPulling="2026-02-02 06:49:32.997037821 +0000 UTC m=+198.374305743" observedRunningTime="2026-02-02 
06:49:33.641366401 +0000 UTC m=+199.018634313" watchObservedRunningTime="2026-02-02 06:49:33.643677107 +0000 UTC m=+199.020945019" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.653277 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.657588 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.670041 4842 scope.go:117] "RemoveContainer" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.687556 4842 scope.go:117] "RemoveContainer" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b" Feb 02 06:49:33 crc kubenswrapper[4842]: E0202 06:49:33.688451 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b\": container with ID starting with e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b not found: ID does not exist" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.688502 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"} err="failed to get container status \"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b\": rpc error: code = NotFound desc = could not find container \"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b\": container with ID starting with e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b not found: ID does not exist" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.688536 4842 scope.go:117] "RemoveContainer" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6" Feb 02 06:49:33 crc kubenswrapper[4842]: E0202 06:49:33.689595 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6\": container with ID starting with 7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6 not found: ID does not exist" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.689682 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"} err="failed to get container status \"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6\": rpc error: code = NotFound desc = could not find container \"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6\": container with ID starting with 7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6 not found: ID does not exist" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.689741 4842 scope.go:117] "RemoveContainer" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f" Feb 02 06:49:33 crc kubenswrapper[4842]: E0202 06:49:33.690489 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f\": container with ID starting with e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f not found: ID does not exist" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.690537 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f"} err="failed to get container status \"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f\": rpc error: code = NotFound desc = could not find container \"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f\": container with ID starting with e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f not found: ID does not exist" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.954225 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060532 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"cedde76f-459c-4b6b-8535-407c5e392ae7\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060635 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod \"cedde76f-459c-4b6b-8535-407c5e392ae7\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060751 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cedde76f-459c-4b6b-8535-407c5e392ae7" (UID: "cedde76f-459c-4b6b-8535-407c5e392ae7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060940 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.063589 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cedde76f-459c-4b6b-8535-407c5e392ae7" (UID: "cedde76f-459c-4b6b-8535-407c5e392ae7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.161927 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.618460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cedde76f-459c-4b6b-8535-407c5e392ae7","Type":"ContainerDied","Data":"a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4"} Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.618502 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.618512 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.621484 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" exitCode=0 Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.621572 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568"} Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.623789 4842 generic.go:334] "Generic (PLEG): container finished" podID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35" exitCode=0 Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.623819 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35"} Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.440051 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" path="/var/lib/kubelet/pods/7be4c568-0aa4-4495-87b0-ec266872eb12/volumes" Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.630303 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerStarted","Data":"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926"} Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.632192 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerStarted","Data":"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"} Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.647173 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l9qkz" podStartSLOduration=4.794729785 podStartE2EDuration="49.647154821s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.176267545 +0000 UTC m=+155.553535487" lastFinishedPulling="2026-02-02 06:49:35.028692611 
+0000 UTC m=+200.405960523" observedRunningTime="2026-02-02 06:49:35.645719616 +0000 UTC m=+201.022987528" watchObservedRunningTime="2026-02-02 06:49:35.647154821 +0000 UTC m=+201.024422733" Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.662333 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5jt7" podStartSLOduration=3.524232864 podStartE2EDuration="49.6623144s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:48.930635679 +0000 UTC m=+154.307903581" lastFinishedPulling="2026-02-02 06:49:35.068717205 +0000 UTC m=+200.445985117" observedRunningTime="2026-02-02 06:49:35.6610903 +0000 UTC m=+201.038358212" watchObservedRunningTime="2026-02-02 06:49:35.6623144 +0000 UTC m=+201.039582312" Feb 02 06:49:36 crc kubenswrapper[4842]: I0202 06:49:36.812103 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:49:36 crc kubenswrapper[4842]: I0202 06:49:36.812164 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:49:36 crc kubenswrapper[4842]: I0202 06:49:36.853271 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.209323 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.209409 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.258339 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.365842 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.366133 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.419501 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.505440 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.505480 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.561610 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.710132 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.827775 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828111 4842 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828132 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828152 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-content" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828165 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-content" Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828187 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerName="pruner" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828201 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerName="pruner" Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828267 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-utilities" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828286 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-utilities" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828512 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerName="pruner" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828536 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.829104 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.831754 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.831905 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.869537 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.010663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.010743 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.010974 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112277 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112342 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112729 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112772 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.132047 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.146415 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.571412 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 06:49:38 crc kubenswrapper[4842]: W0202 06:49:38.574941 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podea82b6bc_5c1e_496e_8501_45fdb7220cbb.slice/crio-0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200 WatchSource:0}: Error finding container 0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200: Status 404 returned error can't find the container with id 0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200 Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.647245 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerStarted","Data":"0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200"} Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.160396 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.161775 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.203929 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.656029 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerStarted","Data":"240ef4d9719e0e125f80aaba75a288ed11f634bda46b01e82f75011b4bb97529"} Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.683614 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.683585608 podStartE2EDuration="2.683585608s" podCreationTimestamp="2026-02-02 06:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:39.678624407 +0000 UTC m=+205.055892369" watchObservedRunningTime="2026-02-02 06:49:39.683585608 +0000 UTC m=+205.060853560" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.724475 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:40 crc kubenswrapper[4842]: I0202 06:49:40.702639 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:40 crc kubenswrapper[4842]: I0202 06:49:40.703100 4842 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:40 crc kubenswrapper[4842]: I0202 06:49:40.765806 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.096768 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.097013 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9mdpt" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" containerID="cri-o://78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" gracePeriod=2 Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.535590 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.659608 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"0401543d-1af2-45fd-a8e1-05cec083bdd7\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.659710 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"0401543d-1af2-45fd-a8e1-05cec083bdd7\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.659843 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"0401543d-1af2-45fd-a8e1-05cec083bdd7\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.661701 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities" (OuterVolumeSpecName: "utilities") pod "0401543d-1af2-45fd-a8e1-05cec083bdd7" (UID: "0401543d-1af2-45fd-a8e1-05cec083bdd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.665666 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj" (OuterVolumeSpecName: "kube-api-access-dtcmj") pod "0401543d-1af2-45fd-a8e1-05cec083bdd7" (UID: "0401543d-1af2-45fd-a8e1-05cec083bdd7"). InnerVolumeSpecName "kube-api-access-dtcmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687410 4842 generic.go:334] "Generic (PLEG): container finished" podID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" exitCode=0 Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687504 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687648 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6"} Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687792 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"ad1fd21c691dc675b62fad95a6e7e8ad52ebcb62e20c4eefb0dc3125badfd973"} Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687894 4842 scope.go:117] "RemoveContainer" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.728485 4842 scope.go:117] "RemoveContainer" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.754003 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0401543d-1af2-45fd-a8e1-05cec083bdd7" (UID: "0401543d-1af2-45fd-a8e1-05cec083bdd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.761843 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.761876 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.761888 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.768870 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.769982 4842 scope.go:117] "RemoveContainer" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.805795 4842 scope.go:117] "RemoveContainer" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" Feb 02 06:49:41 crc kubenswrapper[4842]: E0202 06:49:41.814862 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6\": container with ID starting with 78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6 not found: ID does not exist" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.814920 4842 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6"} err="failed to get container status \"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6\": rpc error: code = NotFound desc = could not find container \"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6\": container with ID starting with 78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6 not found: ID does not exist" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.814955 4842 scope.go:117] "RemoveContainer" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" Feb 02 06:49:41 crc kubenswrapper[4842]: E0202 06:49:41.815442 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd\": container with ID starting with eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd not found: ID does not exist" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.815514 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd"} err="failed to get container status \"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd\": rpc error: code = NotFound desc = could not find container \"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd\": container with ID starting with eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd not found: ID does not exist" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.815580 4842 scope.go:117] "RemoveContainer" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" Feb 02 06:49:41 crc kubenswrapper[4842]: E0202 06:49:41.816078 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0\": container with ID starting with 1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0 not found: ID does not exist" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.816163 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0"} err="failed to get container status \"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0\": rpc error: code = NotFound desc = could not find container \"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0\": container with ID starting with 1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0 not found: ID does not exist" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.036509 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.044207 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.146351 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.146434 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.146497 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.147336 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.147456 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5" gracePeriod=600 Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.696779 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5" exitCode=0 Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.696857 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5"} Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.697115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb"} Feb 02 06:49:43 crc kubenswrapper[4842]: I0202 06:49:43.445836 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" path="/var/lib/kubelet/pods/0401543d-1af2-45fd-a8e1-05cec083bdd7/volumes" Feb 02 06:49:43 crc kubenswrapper[4842]: I0202 06:49:43.487325 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:49:43 crc kubenswrapper[4842]: I0202 06:49:43.487679 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m6ms7" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" containerID="cri-o://f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489" gracePeriod=2 Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.189693 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"] Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.190421 4842 kuberuntime_container.go:808] "Killing 
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.211655 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"]
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.211970 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" containerID="cri-o://55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7" gracePeriod=30
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.576414 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.704068 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.704136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.704199 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.706040 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities" (OuterVolumeSpecName: "utilities") pod "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" (UID: "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.718712 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq" (OuterVolumeSpecName: "kube-api-access-jwfcq") pod "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" (UID: "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"). InnerVolumeSpecName "kube-api-access-jwfcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.729798 4842 generic.go:334] "Generic (PLEG): container finished" podID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerID="55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7" exitCode=0
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.729804 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" (UID: "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.729880 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerDied","Data":"55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7"}
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.730162 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerDied","Data":"0429779ecc8d7f354927858d9f829de9c008478a695454154ec2b53a1da0abb2"}
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.730276 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0429779ecc8d7f354927858d9f829de9c008478a695454154ec2b53a1da0abb2"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.731400 4842 generic.go:334] "Generic (PLEG): container finished" podID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerID="460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6" exitCode=0
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.731503 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerDied","Data":"460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6"}
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.733170 4842 generic.go:334] "Generic (PLEG): container finished" podID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489" exitCode=0
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.733255 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.733283 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"}
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.735408 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"d839d2fe1ddee6dc1ee5e5c2514aaebc941a9e75e08e10d40cd5d9caf2627fd2"}
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.735459 4842 scope.go:117] "RemoveContainer" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.751354 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.762635 4842 scope.go:117] "RemoveContainer" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.770898 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"]
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.773537 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"]
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.795013 4842 scope.go:117] "RemoveContainer" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.805496 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.805526 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.805541 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.826054 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.833108 4842 scope.go:117] "RemoveContainer" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"
Feb 02 06:49:44 crc kubenswrapper[4842]: E0202 06:49:44.833566 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489\": container with ID starting with f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489 not found: ID does not exist" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.833672 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"} err="failed to get container status \"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489\": rpc error: code = NotFound desc = could not find container \"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489\": container with ID starting with f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489 not found: ID does not exist"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.833767 4842 scope.go:117] "RemoveContainer" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca"
Feb 02 06:49:44 crc kubenswrapper[4842]: E0202 06:49:44.834154 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca\": container with ID starting with df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca not found: ID does not exist" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.834256 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca"} err="failed to get container status \"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca\": rpc error: code = NotFound desc = could not find container \"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca\": container with ID starting with df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca not found: ID does not exist"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.834354 4842 scope.go:117] "RemoveContainer" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d"
Feb 02 06:49:44 crc kubenswrapper[4842]: E0202 06:49:44.834914 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d\": container with ID starting with d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d not found: ID does not exist" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.834997 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d"} err="failed to get container status \"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d\": rpc error: code = NotFound desc = could not find container \"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d\": container with ID starting with d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d not found: ID does not exist"
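[Editor's note] The E/I pairs above show RemoveContainer racing against a container that CRI-O has already removed: the status lookup returns gRPC NotFound, the kubelet logs it, and cleanup proceeds anyway. A minimal sketch of that idempotent-delete pattern, assuming the runtime surfaces gRPC status codes (this is not kubelet's actual code; remove is a stand-in for a CRI RemoveContainer call):

    package sketch

    import (
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIgnoringNotFound treats NotFound as success, which is why the
    // NotFound errors above are logged and then ignored: a container that
    // is already gone needs no further deletion.
    func removeIgnoringNotFound(remove func(id string) error, id string) error {
    	err := remove(id)
    	if err != nil && status.Code(err) == codes.NotFound {
    		return nil // already gone; nothing to do
    	}
    	return err
    }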
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.906927 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.906977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.907007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.907035 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.907850 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config" (OuterVolumeSpecName: "config") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.908162 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.910481 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.910497 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw" (OuterVolumeSpecName: "kube-api-access-jrbqw") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "kube-api-access-jrbqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.007936 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008193 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008288 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008332 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008392 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008818 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008878 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008908 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008935 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.009415 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.009628 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.009915 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config" (OuterVolumeSpecName: "config") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.013877 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.014555 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f" (OuterVolumeSpecName: "kube-api-access-4gr2f") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "kube-api-access-4gr2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.109973 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110023 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110038 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110051 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110063 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.451793 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" path="/var/lib/kubelet/pods/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb/volumes"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.695900 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"]
pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696775 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696805 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696826 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696840 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696865 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696878 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696892 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerName="controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696908 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerName="controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696928 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696940 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696961 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696975 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696998 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.697011 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.697031 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.697045 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.698719 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 
06:49:45.698765 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.698823 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerName="controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.698842 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.701293 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.714548 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.717322 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.725580 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.729568 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.751756 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.754455 4842 util.go:48] "No ready sandbox for pod can be found. 
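[Editor's note] The cpu_manager/memory_manager "RemoveStaleState" entries above sweep per-container resource assignments left behind by the pods just deleted, before the replacement pods are admitted. A sketch of the pattern, with hypothetical types and names (not kubelet's actual code): state is keyed by (podUID, containerName), and entries whose pod is no longer active are dropped.

    package sketch

    // stateKey is a hypothetical key mirroring the (podUID, containerName)
    // pairs in the log entries above.
    type stateKey struct{ podUID, containerName string }

    // removeStaleState drops assignments for pods that are no longer
    // active. Deleting map entries during range is safe in Go.
    func removeStaleState(assignments map[stateKey]struct{}, activePods map[string]bool) {
    	for k := range assignments {
    		if !activePods[k.podUID] {
    			delete(assignments, k)
    		}
    	}
    }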
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.754821 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerDied","Data":"ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f"}
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.754856 4842 scope.go:117] "RemoveContainer" containerID="460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.794176 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"]
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.799297 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"]
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.806385 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"]
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.814872 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"]
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819168 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819206 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819268 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819371 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819396 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819420 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819548 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819601 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819657 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.921061 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.921375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922449 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922473 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922501 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.923134 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.923909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.924419 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.925584 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.926696 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.928151 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.932659 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.949234 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.950874 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.081718 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.085075 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.402452 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"]
Feb 02 06:49:46 crc kubenswrapper[4842]: W0202 06:49:46.415851 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81aa66cb_52e6_47c7_a265_f441c27469ab.slice/crio-9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847 WatchSource:0}: Error finding container 9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847: Status 404 returned error can't find the container with id 9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.566105 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"]
Feb 02 06:49:46 crc kubenswrapper[4842]: W0202 06:49:46.574341 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e4df66_5150_49ad_8fe1_a4c7cd09bb97.slice/crio-009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8 WatchSource:0}: Error finding container 009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8: Status 404 returned error can't find the container with id 009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.758515 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerStarted","Data":"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a"}
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.758823 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.758834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerStarted","Data":"009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8"}
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.761691 4842 patch_prober.go:28] interesting pod/controller-manager-577b8789bf-xqfmj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.761727 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.762425 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerStarted","Data":"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba"}
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.762473 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerStarted","Data":"9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847"}
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.762816 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.784689 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" podStartSLOduration=2.784670754 podStartE2EDuration="2.784670754s" podCreationTimestamp="2026-02-02 06:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:46.783027964 +0000 UTC m=+212.160295936" watchObservedRunningTime="2026-02-02 06:49:46.784670754 +0000 UTC m=+212.161938666"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.806880 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" podStartSLOduration=2.806854364 podStartE2EDuration="2.806854364s" podCreationTimestamp="2026-02-02 06:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:46.80547096 +0000 UTC m=+212.182738912" watchObservedRunningTime="2026-02-02 06:49:46.806854364 +0000 UTC m=+212.184122306"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.847306 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"
Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.861319 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.457803 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" path="/var/lib/kubelet/pods/0b226528-cbee-4e1b-a63a-2e9cb152a9a5/volumes"
Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.459389 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" path="/var/lib/kubelet/pods/2cd1f864-6b9b-4113-b65e-446049b9af92/volumes"
Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.483053 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.557077 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.784094 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj"
Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.288999 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"]
pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.289451 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l9qkz" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server" containerID="cri-o://2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" gracePeriod=2 Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.771970 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.797907 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" exitCode=0 Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.797974 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.797991 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926"} Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.798074 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2"} Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.798136 4842 scope.go:117] "RemoveContainer" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.824395 4842 scope.go:117] "RemoveContainer" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.845437 4842 scope.go:117] "RemoveContainer" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.870971 4842 scope.go:117] "RemoveContainer" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" Feb 02 06:49:49 crc kubenswrapper[4842]: E0202 06:49:49.871430 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926\": container with ID starting with 2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926 not found: ID does not exist" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.871463 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926"} err="failed to get container status \"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926\": rpc error: code = NotFound desc = could not find container \"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926\": container with ID starting with 2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926 not found: ID does not exist" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 
06:49:49.871493 4842 scope.go:117] "RemoveContainer" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" Feb 02 06:49:49 crc kubenswrapper[4842]: E0202 06:49:49.871807 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568\": container with ID starting with 26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568 not found: ID does not exist" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.871832 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568"} err="failed to get container status \"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568\": rpc error: code = NotFound desc = could not find container \"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568\": container with ID starting with 26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568 not found: ID does not exist" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.871851 4842 scope.go:117] "RemoveContainer" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" Feb 02 06:49:49 crc kubenswrapper[4842]: E0202 06:49:49.872056 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5\": container with ID starting with 1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5 not found: ID does not exist" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.872083 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5"} err="failed to get container status \"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5\": rpc error: code = NotFound desc = could not find container \"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5\": container with ID starting with 1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5 not found: ID does not exist" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.884134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.885832 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.885886 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 
06:49:49.887066 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities" (OuterVolumeSpecName: "utilities") pod "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" (UID: "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.893640 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw" (OuterVolumeSpecName: "kube-api-access-mrqbw") pod "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" (UID: "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"). InnerVolumeSpecName "kube-api-access-mrqbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.966563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" (UID: "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.989644 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.989690 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.989702 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:50 crc kubenswrapper[4842]: I0202 06:49:50.146096 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:49:50 crc kubenswrapper[4842]: I0202 06:49:50.150013 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:49:51 crc kubenswrapper[4842]: I0202 06:49:51.444449 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" path="/var/lib/kubelet/pods/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb/volumes" Feb 02 06:49:55 crc kubenswrapper[4842]: I0202 06:49:55.714140 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift" containerID="cri-o://25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f" gracePeriod=15 Feb 02 06:49:55 crc kubenswrapper[4842]: I0202 06:49:55.859392 4842 generic.go:334] "Generic (PLEG): container finished" podID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerID="25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f" exitCode=0 Feb 02 06:49:55 crc kubenswrapper[4842]: I0202 06:49:55.859462 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" 
event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerDied","Data":"25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f"} Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.288148 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474064 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474175 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474210 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474295 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474335 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475087 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475199 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475304 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475423 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475490 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475541 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475595 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475649 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476290 4842 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476412 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475826 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476510 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476650 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.483761 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.483993 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.484460 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.484622 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw" (OuterVolumeSpecName: "kube-api-access-bmndw") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "kube-api-access-bmndw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.485187 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.485710 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.486027 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.486486 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.491065 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577067 4842 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577136 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577167 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577197 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577292 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577321 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577348 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577376 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577424 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577452 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577477 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577504 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577531 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.874235 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerDied","Data":"9e442ed8624abf7c7c008be60f767ce4757519be014cdfd4e95fe98d8969b767"} Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.874290 4842 scope.go:117] "RemoveContainer" containerID="25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.874411 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.917431 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"] Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.920565 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"] Feb 02 06:49:57 crc kubenswrapper[4842]: I0202 06:49:57.445472 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" path="/var/lib/kubelet/pods/bf91f3e9-19c2-4f18-b129-41aafd1a1264/volumes" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.704650 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c494796b-xmpl7"] Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705543 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-utilities" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705565 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-utilities" Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705593 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-content" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705607 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-content" Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705626 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705638 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server" Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705673 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705689 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift" Feb 02 06:49:59 
crc kubenswrapper[4842]: I0202 06:49:59.705897 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705921 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.706572 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.713410 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.713907 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.714265 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.714502 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.715170 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.715600 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.716150 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.716797 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719583 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719662 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-session\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719760 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-service-ca\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719793 4842 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-router-certs\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719834 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719873 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719908 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-error\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719955 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719989 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720023 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720070 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxj6\" (UniqueName: \"kubernetes.io/projected/a2f32ab9-c38e-4e56-867d-7c1f14d54868-kube-api-access-bdxj6\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " 
pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720108 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-policies\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720142 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-dir\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720174 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-login\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.722363 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.722921 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.723255 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.723360 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.734712 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.735066 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.740138 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c494796b-xmpl7"] Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.743435 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821640 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-login\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821731 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821770 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-session\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-service-ca\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-router-certs\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821997 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822054 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822089 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-error\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822139 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822177 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822210 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822279 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxj6\" (UniqueName: \"kubernetes.io/projected/a2f32ab9-c38e-4e56-867d-7c1f14d54868-kube-api-access-bdxj6\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822317 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-policies\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822352 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-dir\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822499 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-dir\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.824279 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.824304 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.824568 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-policies\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " 
pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.825466 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-service-ca\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.830604 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.830712 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.831031 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.831280 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-error\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.832121 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-router-certs\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.832689 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-session\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.833875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 
06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.834131 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-login\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.854825 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdxj6\" (UniqueName: \"kubernetes.io/projected/a2f32ab9-c38e-4e56-867d-7c1f14d54868-kube-api-access-bdxj6\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:00 crc kubenswrapper[4842]: I0202 06:50:00.041929 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:00 crc kubenswrapper[4842]: I0202 06:50:00.616022 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c494796b-xmpl7"] Feb 02 06:50:00 crc kubenswrapper[4842]: W0202 06:50:00.622209 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f32ab9_c38e_4e56_867d_7c1f14d54868.slice/crio-a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef WatchSource:0}: Error finding container a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef: Status 404 returned error can't find the container with id a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef Feb 02 06:50:00 crc kubenswrapper[4842]: I0202 06:50:00.914359 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" event={"ID":"a2f32ab9-c38e-4e56-867d-7c1f14d54868","Type":"ContainerStarted","Data":"a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef"} Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.921755 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" event={"ID":"a2f32ab9-c38e-4e56-867d-7c1f14d54868","Type":"ContainerStarted","Data":"0750c0dde31751ccbcbdb957d880d44f1e29d4b7a9954705a364d7cb82e7dcbb"} Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.922336 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.930008 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.943699 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" podStartSLOduration=31.9436737 podStartE2EDuration="31.9436737s" podCreationTimestamp="2026-02-02 06:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:01.941894217 +0000 UTC m=+227.319162199" watchObservedRunningTime="2026-02-02 06:50:01.9436737 +0000 UTC m=+227.320941652" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.143799 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.144315 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" containerID="cri-o://768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" gracePeriod=30 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.241274 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.241535 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" containerID="cri-o://c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" gracePeriod=30 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.832331 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.860067 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.897260 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.898583 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config" (OuterVolumeSpecName: "config") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937423 4842 generic.go:334] "Generic (PLEG): container finished" podID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" exitCode=0 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937474 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerDied","Data":"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937496 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerDied","Data":"9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937512 4842 scope.go:117] "RemoveContainer" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937605 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942065 4842 generic.go:334] "Generic (PLEG): container finished" podID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" exitCode=0 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942109 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerDied","Data":"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942139 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerDied","Data":"009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942192 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.953941 4842 scope.go:117] "RemoveContainer" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" Feb 02 06:50:04 crc kubenswrapper[4842]: E0202 06:50:04.954499 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba\": container with ID starting with c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba not found: ID does not exist" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.954537 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba"} err="failed to get container status \"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba\": rpc error: code = NotFound desc = could not find container \"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba\": container with ID starting with c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba not found: ID does not exist" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.954558 4842 scope.go:117] "RemoveContainer" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.969278 4842 scope.go:117] "RemoveContainer" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" Feb 02 06:50:04 crc kubenswrapper[4842]: E0202 06:50:04.969689 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a\": container with ID starting with 768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a not found: ID does not exist" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.969709 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a"} 
err="failed to get container status \"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a\": rpc error: code = NotFound desc = could not find container \"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a\": container with ID starting with 768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a not found: ID does not exist" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998243 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998370 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998397 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998417 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998438 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998469 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998510 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998529 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998697 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.999347 4842 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.999958 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca" (OuterVolumeSpecName: "client-ca") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.000396 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca" (OuterVolumeSpecName: "client-ca") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.000764 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config" (OuterVolumeSpecName: "config") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.004170 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc" (OuterVolumeSpecName: "kube-api-access-b58hc") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "kube-api-access-b58hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.004588 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m" (OuterVolumeSpecName: "kube-api-access-xjb8m") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "kube-api-access-xjb8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.005508 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.005771 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099834 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099885 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099898 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099908 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099917 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099926 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099936 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099961 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.301025 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.304674 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.311173 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.316691 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.440860 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" path="/var/lib/kubelet/pods/12e4df66-5150-49ad-8fe1-a4c7cd09bb97/volumes" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.441522 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" path="/var/lib/kubelet/pods/81aa66cb-52e6-47c7-a265-f441c27469ab/volumes" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714087 4842 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"] Feb 02 06:50:05 crc kubenswrapper[4842]: E0202 06:50:05.714341 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714356 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: E0202 06:50:05.714372 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714381 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714495 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714508 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714874 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.718275 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.718534 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.719410 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.719569 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.720052 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.720535 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.723113 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.723950 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.725998 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728060 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728078 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728452 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728554 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728625 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728809 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728843 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728906 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.912834 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d04fc416-09dd-4101-b594-09adf0fca345-serving-cert\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-proxy-ca-bundles\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913279 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chlp\" (UniqueName: \"kubernetes.io/projected/d04fc416-09dd-4101-b594-09adf0fca345-kube-api-access-8chlp\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913488 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadf3560-132b-4d19-b532-2cfb01019ca2-serving-cert\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " 
pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913526 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-config\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-config\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913648 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8h5\" (UniqueName: \"kubernetes.io/projected/dadf3560-132b-4d19-b532-2cfb01019ca2-kube-api-access-zm8h5\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-client-ca\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913714 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-client-ca\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadf3560-132b-4d19-b532-2cfb01019ca2-serving-cert\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015069 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-config\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015092 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-config\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 
02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015126 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8h5\" (UniqueName: \"kubernetes.io/projected/dadf3560-132b-4d19-b532-2cfb01019ca2-kube-api-access-zm8h5\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015140 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-client-ca\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015161 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-client-ca\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015181 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d04fc416-09dd-4101-b594-09adf0fca345-serving-cert\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015198 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-proxy-ca-bundles\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015261 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chlp\" (UniqueName: \"kubernetes.io/projected/d04fc416-09dd-4101-b594-09adf0fca345-kube-api-access-8chlp\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.016517 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-proxy-ca-bundles\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.017364 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-client-ca\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.017511 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-client-ca\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.017649 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-config\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.020374 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadf3560-132b-4d19-b532-2cfb01019ca2-serving-cert\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.020537 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-config\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.021337 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d04fc416-09dd-4101-b594-09adf0fca345-serving-cert\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.037663 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chlp\" (UniqueName: \"kubernetes.io/projected/d04fc416-09dd-4101-b594-09adf0fca345-kube-api-access-8chlp\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.040533 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8h5\" (UniqueName: \"kubernetes.io/projected/dadf3560-132b-4d19-b532-2cfb01019ca2-kube-api-access-zm8h5\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.091719 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.096963 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.548346 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"] Feb 02 06:50:06 crc kubenswrapper[4842]: W0202 06:50:06.550696 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddadf3560_132b_4d19_b532_2cfb01019ca2.slice/crio-badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4 WatchSource:0}: Error finding container badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4: Status 404 returned error can't find the container with id badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4 Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.639657 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"] Feb 02 06:50:06 crc kubenswrapper[4842]: W0202 06:50:06.644507 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd04fc416_09dd_4101_b594_09adf0fca345.slice/crio-f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78 WatchSource:0}: Error finding container f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78: Status 404 returned error can't find the container with id f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78 Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.955923 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" event={"ID":"dadf3560-132b-4d19-b532-2cfb01019ca2","Type":"ContainerStarted","Data":"f39efadb06d27e43a6f28be0a797887f10e5c3790fa6867dee0a09ae275ad961"} Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.955975 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" event={"ID":"dadf3560-132b-4d19-b532-2cfb01019ca2","Type":"ContainerStarted","Data":"badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4"} Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.957052 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.959150 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" event={"ID":"d04fc416-09dd-4101-b594-09adf0fca345","Type":"ContainerStarted","Data":"48928bfbbf05285bcb0191927a87bcda75eacbe5fbd97b3c0f47b7d6a51f5079"} Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.959238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" event={"ID":"d04fc416-09dd-4101-b594-09adf0fca345","Type":"ContainerStarted","Data":"f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78"} Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.959471 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.965628 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.972491 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" podStartSLOduration=2.972480846 podStartE2EDuration="2.972480846s" podCreationTimestamp="2026-02-02 06:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:06.96978041 +0000 UTC m=+232.347048322" watchObservedRunningTime="2026-02-02 06:50:06.972480846 +0000 UTC m=+232.349748758" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.989250 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" podStartSLOduration=2.989231453 podStartE2EDuration="2.989231453s" podCreationTimestamp="2026-02-02 06:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:06.988722661 +0000 UTC m=+232.365990583" watchObservedRunningTime="2026-02-02 06:50:06.989231453 +0000 UTC m=+232.366499375" Feb 02 06:50:07 crc kubenswrapper[4842]: I0202 06:50:07.423405 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.771958 4842 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774728 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe" gracePeriod=15 Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774817 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518" gracePeriod=15 Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774793 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7" gracePeriod=15 Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774845 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee" gracePeriod=15 Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774942 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5" gracePeriod=15 Feb 02 
06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.775837 4842 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776184 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776207 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776250 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776264 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776287 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776302 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776327 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776338 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776357 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776369 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776385 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776397 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776412 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776424 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776596 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776618 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776636 4842 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776653 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776675 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776992 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.781320 4842 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.783046 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.788807 4842 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.851680 4842 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902051 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902426 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902483 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902601 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.003993 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004058 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004104 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004153 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004172 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004243 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004257 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004332 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004296 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004353 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004306 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004417 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004463 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004500 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004463 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.021366 4842 generic.go:334] "Generic (PLEG): container finished" podID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerID="240ef4d9719e0e125f80aaba75a288ed11f634bda46b01e82f75011b4bb97529" exitCode=0 Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.021479 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerDied","Data":"240ef4d9719e0e125f80aaba75a288ed11f634bda46b01e82f75011b4bb97529"} Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.024494 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.025514 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.027105 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028022 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7" exitCode=0 Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028055 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518" exitCode=0 Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028072 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5" exitCode=0 Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028086 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee" exitCode=2 Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028134 4842 scope.go:117] "RemoveContainer" containerID="628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169" Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.152937 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:17 crc kubenswrapper[4842]: W0202 06:50:17.172284 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c WatchSource:0}: Error finding container 5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c: Status 404 returned error can't find the container with id 5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c Feb 02 06:50:17 crc kubenswrapper[4842]: E0202 06:50:17.179398 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905b47b9f6be2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,LastTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.037983 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106"} Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.038462 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c"} Feb 02 06:50:18 crc kubenswrapper[4842]: E0202 06:50:18.039411 4842 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.039544 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.042613 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.542576 4842 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.543777 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.626879 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.626933 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.626967 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea82b6bc-5c1e-496e-8501-45fdb7220cbb" (UID: "ea82b6bc-5c1e-496e-8501-45fdb7220cbb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627043 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627143 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock" (OuterVolumeSpecName: "var-lock") pod "ea82b6bc-5c1e-496e-8501-45fdb7220cbb" (UID: "ea82b6bc-5c1e-496e-8501-45fdb7220cbb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627307 4842 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627322 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.634486 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea82b6bc-5c1e-496e-8501-45fdb7220cbb" (UID: "ea82b6bc-5c1e-496e-8501-45fdb7220cbb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.729128 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.051476 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerDied","Data":"0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200"} Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.051745 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.051595 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.157819 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.163257 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.164444 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.165103 4842 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.165690 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251722 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251879 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251907 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251939 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252005 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252088 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252296 4842 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252318 4842 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252335 4842 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.446782 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 06:50:19 crc kubenswrapper[4842]: E0202 06:50:19.767759 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905b47b9f6be2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,LastTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.061181 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062037 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe" exitCode=0 Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062179 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062238 4842 scope.go:117] "RemoveContainer" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062870 4842 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.063267 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.067063 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.067378 4842 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.088506 4842 scope.go:117] "RemoveContainer" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.110953 4842 scope.go:117] "RemoveContainer" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.127034 4842 scope.go:117] "RemoveContainer" containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.146053 4842 scope.go:117] "RemoveContainer" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.162437 4842 scope.go:117] "RemoveContainer" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.195149 4842 scope.go:117] "RemoveContainer" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7" Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.195714 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\": container with ID starting with a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7 not found: ID does not exist" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.195786 4842 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7"} err="failed to get container status \"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\": rpc error: code = NotFound desc = could not find container \"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\": container with ID starting with a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7 not found: ID does not exist" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.195826 4842 scope.go:117] "RemoveContainer" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518" Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.196125 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\": container with ID starting with d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518 not found: ID does not exist" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196162 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518"} err="failed to get container status \"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\": rpc error: code = NotFound desc = could not find container \"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\": container with ID starting with d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518 not found: ID does not exist" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196180 4842 scope.go:117] "RemoveContainer" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5" Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.196485 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\": container with ID starting with 9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5 not found: ID does not exist" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196528 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5"} err="failed to get container status \"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\": rpc error: code = NotFound desc = could not find container \"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\": container with ID starting with 9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5 not found: ID does not exist" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196564 4842 scope.go:117] "RemoveContainer" containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee" Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.196841 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\": container with ID starting with 231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee not found: ID does not exist" 
containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196876 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee"} err="failed to get container status \"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\": rpc error: code = NotFound desc = could not find container \"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\": container with ID starting with 231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee not found: ID does not exist" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196895 4842 scope.go:117] "RemoveContainer" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe" Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.197076 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\": container with ID starting with 7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe not found: ID does not exist" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.197102 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe"} err="failed to get container status \"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\": rpc error: code = NotFound desc = could not find container \"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\": container with ID starting with 7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe not found: ID does not exist" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.197118 4842 scope.go:117] "RemoveContainer" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45" Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.197567 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\": container with ID starting with 3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45 not found: ID does not exist" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45" Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.197702 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45"} err="failed to get container status \"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\": rpc error: code = NotFound desc = could not find container \"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\": container with ID starting with 3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45 not found: ID does not exist" Feb 02 06:50:25 crc kubenswrapper[4842]: I0202 06:50:25.446829 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:26 crc 
kubenswrapper[4842]: E0202 06:50:26.189993 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.190496 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.190940 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.191426 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.192065 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:26 crc kubenswrapper[4842]: I0202 06:50:26.192116 4842 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.192625 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.393373 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms" Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.794619 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms" Feb 02 06:50:27 crc kubenswrapper[4842]: E0202 06:50:27.596126 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s" Feb 02 06:50:29 crc kubenswrapper[4842]: E0202 06:50:29.198404 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s" Feb 02 06:50:29 crc kubenswrapper[4842]: E0202 06:50:29.770272 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 
38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905b47b9f6be2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,LastTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.135860 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.136317 4842 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf" exitCode=1 Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.136447 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf"} Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.137421 4842 scope.go:117] "RemoveContainer" containerID="2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.137755 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.138737 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.433297 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.434809 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.435405 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.448948 4842 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.448981 4842 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:30 crc kubenswrapper[4842]: E0202 06:50:30.449404 4842 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.449986 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:30 crc kubenswrapper[4842]: W0202 06:50:30.490988 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63 WatchSource:0}: Error finding container 1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63: Status 404 returned error can't find the container with id 1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63 Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148006 4842 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e239dbed7987b10b04ec8caef7e2da3b79cf6b6d24948f7583a18830832c0b2b" exitCode=0 Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148152 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e239dbed7987b10b04ec8caef7e2da3b79cf6b6d24948f7583a18830832c0b2b"} Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148418 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63"} Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148986 4842 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.149040 4842 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.149533 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:31 crc kubenswrapper[4842]: E0202 06:50:31.149849 4842 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.150142 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.153814 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.153904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60"} Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.155039 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.155754 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.899043 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:50:32 crc kubenswrapper[4842]: I0202 06:50:32.164366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e30ccc92010a71c941ffa3971080c2714655e55cab7a71a1f0418834a654b59d"} Feb 02 06:50:32 crc kubenswrapper[4842]: I0202 06:50:32.164408 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a1cad6774cf3511d926447185654d22ba47ce37238d6fec0196a476ad1a4cb2"} Feb 02 06:50:32 crc kubenswrapper[4842]: I0202 06:50:32.164419 4842 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2674c3e849babe1ce160765c5bf41b34ed73314d3d4518a4221eb22d72e68d4b"} Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.171791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3648926f4df743e137834a490d45f1c3ce203d74c3fec461b83175cf38ade3ad"} Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172043 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"51668a4b98d59c5beba508371586ec94d9bb4c2af695ce0219bf0d93e8844af4"} Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172274 4842 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172286 4842 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172281 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.452127 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.452419 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.463986 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.604522 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.604957 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.605130 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 06:50:38 crc kubenswrapper[4842]: I0202 06:50:38.187987 4842 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:38 crc kubenswrapper[4842]: I0202 06:50:38.266977 4842 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1fc8a1ea-35cd-4572-a73e-62404385c296" Feb 02 06:50:39 crc kubenswrapper[4842]: I0202 06:50:39.206918 4842 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:39 crc kubenswrapper[4842]: I0202 06:50:39.207390 4842 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:39 crc kubenswrapper[4842]: I0202 06:50:39.210047 4842 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1fc8a1ea-35cd-4572-a73e-62404385c296" Feb 02 06:50:45 crc kubenswrapper[4842]: I0202 06:50:45.605162 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 06:50:45 crc kubenswrapper[4842]: I0202 06:50:45.606863 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 06:50:48 crc kubenswrapper[4842]: I0202 06:50:48.693776 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 06:50:48 crc kubenswrapper[4842]: I0202 06:50:48.702574 4842 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.045766 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.269822 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.645520 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.714748 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.801863 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.848548 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.214902 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.344024 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.396377 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.413566 4842 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.815564 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.911182 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.988737 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.114982 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.347567 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.479155 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.826595 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.920078 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.920518 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.920278 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.922984 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.923204 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.923374 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.923610 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.924565 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.929111 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.932505 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.951507 4842 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.033278 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.070708 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.121030 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.135076 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.159987 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.203457 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.336024 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.442556 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.468205 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.569601 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.878601 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.951148 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.000507 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.025647 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.034985 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.035170 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.060335 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.078056 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.126992 4842 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.214656 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.266031 4842 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.305484 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.365325 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.482660 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.521509 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.535523 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.577466 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.586530 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.671456 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.747702 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.900515 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.919534 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.959733 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.993886 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.018106 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.037302 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.097279 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.194836 4842 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.262158 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.312047 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.412315 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.412680 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.419301 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.433513 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.552946 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.568269 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.579550 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.628765 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.664033 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.697489 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.716564 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.733817 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.742952 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.748121 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.750038 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.763643 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.777889 4842 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.809913 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.818247 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.974758 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.018613 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.022324 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.039645 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.065752 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.139121 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.148335 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.252684 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.285747 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.292844 4842 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.304147 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.458901 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.540792 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.604998 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.605092 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: 
connect: connection refused" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.605187 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.606210 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.606505 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60" gracePeriod=30 Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.607664 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.783209 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.838086 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.843405 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.894823 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.925028 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.946190 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.963899 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.989137 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.055960 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.071262 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.078363 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.124974 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 
06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.259755 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.279989 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.355940 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.409929 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.422265 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.439818 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.535086 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.580828 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.583332 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.605272 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.613673 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.662658 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.700192 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.730288 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.731999 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.759370 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.773271 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.816888 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.890023 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.899682 4842 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.938810 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.956930 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.960662 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.058152 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.142299 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.225262 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.373400 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.403525 4842 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.443071 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.481309 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.499867 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.506965 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.609865 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.649510 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.662201 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.728754 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.744702 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.776849 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.787915 4842 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.820060 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.836656 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.839862 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.874249 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.998156 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.032811 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.154822 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.172077 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.266749 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.334719 4842 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.341607 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.341673 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.348972 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.349847 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.370821 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.370799425 podStartE2EDuration="20.370799425s" podCreationTimestamp="2026-02-02 06:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:58.367002048 +0000 UTC m=+283.744269990" watchObservedRunningTime="2026-02-02 06:50:58.370799425 +0000 UTC m=+283.748067377" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.376863 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.377174 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.470013 
4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.509503 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.787778 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.810950 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.818056 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.879044 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.940067 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.947951 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.974750 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.006052 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.070888 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.152471 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.498332 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.524649 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.533650 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.609767 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.750022 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.802337 4842 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.853803 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:50:59 crc 
kubenswrapper[4842]: I0202 06:50:59.872751 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.885025 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.890901 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.052732 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.085829 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.096541 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.115445 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.320197 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.354673 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.475364 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.512814 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.522751 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.539161 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.712914 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.817875 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.887039 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.888714 4842 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.889183 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106" gracePeriod=5 Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.904145 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.927341 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.998388 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.028730 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.055893 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.108953 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.149165 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.407802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.441447 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.521550 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.527097 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.535627 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.564743 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.654564 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.856683 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.858921 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.020665 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.091441 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 06:51:02 crc 
kubenswrapper[4842]: I0202 06:51:02.128427 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.157699 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.180207 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.220496 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.328272 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.333103 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.373614 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.390607 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.396388 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.404822 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.417820 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.454450 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.475984 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.573110 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.591638 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.740998 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.800307 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.849057 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.887957 4842 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.002173 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.098677 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.170964 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.257037 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.273374 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.493670 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.772677 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.821839 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 06:51:04 crc kubenswrapper[4842]: I0202 06:51:04.135930 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 06:51:04 crc kubenswrapper[4842]: I0202 06:51:04.925776 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 06:51:05 crc kubenswrapper[4842]: I0202 06:51:05.514749 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 06:51:05 crc kubenswrapper[4842]: I0202 06:51:05.834137 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.013721 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.013789 4842 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106" exitCode=137 Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.498732 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.498849 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.648919 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.648977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649054 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649113 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649129 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649147 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649195 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649206 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649325 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650454 4842 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650510 4842 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650535 4842 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650556 4842 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.661149 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.751878 4842 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.023004 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.023435 4842 scope.go:117] "RemoveContainer" containerID="52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106" Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.023521 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.241412 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.443399 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.508412 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.277263 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"] Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.278615 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-74vp9" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server" containerID="cri-o://6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca" gracePeriod=30 Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.288896 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"] Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.289558 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z5jt7" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server" containerID="cri-o://85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946" gracePeriod=30 Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.305680 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.305989 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" containerID="cri-o://817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a" gracePeriod=30 Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.328739 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.329194 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m2j5m" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server" containerID="cri-o://c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025" gracePeriod=30 Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.336942 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.337316 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5l5m7" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server" containerID="cri-o://d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a" gracePeriod=30 Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.807552 4842 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.899471 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.906236 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.909584 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.932900 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.939595 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.939632 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.939688 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.940620 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities" (OuterVolumeSpecName: "utilities") pod "69e94ec9-2a3b-4f85-a2b7-9e2f07359890" (UID: "69e94ec9-2a3b-4f85-a2b7-9e2f07359890"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.949534 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f" (OuterVolumeSpecName: "kube-api-access-q662f") pod "69e94ec9-2a3b-4f85-a2b7-9e2f07359890" (UID: "69e94ec9-2a3b-4f85-a2b7-9e2f07359890"). InnerVolumeSpecName "kube-api-access-q662f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.996615 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69e94ec9-2a3b-4f85-a2b7-9e2f07359890" (UID: "69e94ec9-2a3b-4f85-a2b7-9e2f07359890"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.040963 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041120 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"de569fea-56ca-4762-9a22-a12561c296b6\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"671957e9-c40d-416d-8756-a4d7f0abc317\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041406 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") pod \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041537 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"99088cf9-5dcc-4837-943b-4deca45c1401\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041671 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"de569fea-56ca-4762-9a22-a12561c296b6\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041771 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"99088cf9-5dcc-4837-943b-4deca45c1401\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041878 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"de569fea-56ca-4762-9a22-a12561c296b6\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041982 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"671957e9-c40d-416d-8756-a4d7f0abc317\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042082 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod 
\"671957e9-c40d-416d-8756-a4d7f0abc317\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042196 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"99088cf9-5dcc-4837-943b-4deca45c1401\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042337 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042585 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042680 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042781 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042039 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c4f753a1-ecf0-4b2c-9121-989677c6b2a6" (UID: "c4f753a1-ecf0-4b2c-9121-989677c6b2a6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042723 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities" (OuterVolumeSpecName: "utilities") pod "671957e9-c40d-416d-8756-a4d7f0abc317" (UID: "671957e9-c40d-416d-8756-a4d7f0abc317"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042801 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities" (OuterVolumeSpecName: "utilities") pod "de569fea-56ca-4762-9a22-a12561c296b6" (UID: "de569fea-56ca-4762-9a22-a12561c296b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.044531 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4" (OuterVolumeSpecName: "kube-api-access-8k4r4") pod "de569fea-56ca-4762-9a22-a12561c296b6" (UID: "de569fea-56ca-4762-9a22-a12561c296b6"). InnerVolumeSpecName "kube-api-access-8k4r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.045126 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l" (OuterVolumeSpecName: "kube-api-access-p8v2l") pod "671957e9-c40d-416d-8756-a4d7f0abc317" (UID: "671957e9-c40d-416d-8756-a4d7f0abc317"). InnerVolumeSpecName "kube-api-access-p8v2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.045308 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr" (OuterVolumeSpecName: "kube-api-access-pwwsr") pod "c4f753a1-ecf0-4b2c-9121-989677c6b2a6" (UID: "c4f753a1-ecf0-4b2c-9121-989677c6b2a6"). InnerVolumeSpecName "kube-api-access-pwwsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.045304 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities" (OuterVolumeSpecName: "utilities") pod "99088cf9-5dcc-4837-943b-4deca45c1401" (UID: "99088cf9-5dcc-4837-943b-4deca45c1401"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.046152 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg" (OuterVolumeSpecName: "kube-api-access-7gfrg") pod "99088cf9-5dcc-4837-943b-4deca45c1401" (UID: "99088cf9-5dcc-4837-943b-4deca45c1401"). InnerVolumeSpecName "kube-api-access-7gfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.047390 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c4f753a1-ecf0-4b2c-9121-989677c6b2a6" (UID: "c4f753a1-ecf0-4b2c-9121-989677c6b2a6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072368 4842 generic.go:334] "Generic (PLEG): container finished" podID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a" exitCode=0 Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072466 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072483 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerDied","Data":"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerDied","Data":"86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072543 4842 scope.go:117] "RemoveContainer" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076180 4842 generic.go:334] "Generic (PLEG): container finished" podID="de569fea-56ca-4762-9a22-a12561c296b6" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025" exitCode=0 Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076270 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076237 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076754 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"281d01870ece6a3181561fda9dfe308cdde10657dccb47ecb2c8628297416b48"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080062 4842 generic.go:334] "Generic (PLEG): container finished" podID="671957e9-c40d-416d-8756-a4d7f0abc317" containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca" exitCode=0 Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080202 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080344 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"e77b162572adbddd868d73ee2b2382cf4886626b5d00d4cbd3b5a5a655acde51"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080538 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083561 4842 generic.go:334] "Generic (PLEG): container finished" podID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946" exitCode=0 Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083697 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083741 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083860 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.086665 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de569fea-56ca-4762-9a22-a12561c296b6" (UID: "de569fea-56ca-4762-9a22-a12561c296b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090241 4842 generic.go:334] "Generic (PLEG): container finished" podID="99088cf9-5dcc-4837-943b-4deca45c1401" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a" exitCode=0 Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090371 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9"} Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090484 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.104536 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109123 4842 scope.go:117] "RemoveContainer" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109411 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.109838 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a\": container with ID starting with 817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a not found: ID does not exist" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109879 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"} err="failed to get container status \"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a\": rpc error: code = NotFound desc = could not find container \"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a\": container with ID starting with 817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109906 4842 scope.go:117] "RemoveContainer" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.122079 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.126143 4842 scope.go:117] "RemoveContainer" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.129035 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.134324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "671957e9-c40d-416d-8756-a4d7f0abc317" (UID: "671957e9-c40d-416d-8756-a4d7f0abc317"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.139520 4842 scope.go:117] "RemoveContainer" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143780 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143802 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143813 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143821 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143830 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143838 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143846 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143855 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143863 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143871 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143879 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.158543 4842 scope.go:117] "RemoveContainer" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.159022 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025\": container with ID starting with c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025 not found: ID does not exist" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159069 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"} err="failed to get container status \"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025\": rpc error: code = NotFound desc = could not find container \"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025\": container with ID starting with c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025 not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159098 4842 scope.go:117] "RemoveContainer" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.159618 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae\": container with ID starting with d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae not found: ID does not exist" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159644 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"} err="failed to get container status \"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae\": rpc error: code = NotFound desc = could not find container \"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae\": container with ID starting with d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159656 4842 scope.go:117] "RemoveContainer" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.159952 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c\": container with ID starting with cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c not found: ID does not exist" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159998 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c"} err="failed to get container status \"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c\": rpc error: code = NotFound desc = could not find container \"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c\": container with ID starting with cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.160033 4842 scope.go:117] "RemoveContainer" 
containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.175406 4842 scope.go:117] "RemoveContainer" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.195194 4842 scope.go:117] "RemoveContainer" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.205734 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99088cf9-5dcc-4837-943b-4deca45c1401" (UID: "99088cf9-5dcc-4837-943b-4deca45c1401"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.208841 4842 scope.go:117] "RemoveContainer" containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.209151 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca\": container with ID starting with 6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca not found: ID does not exist" containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209186 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"} err="failed to get container status \"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca\": rpc error: code = NotFound desc = could not find container \"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca\": container with ID starting with 6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209251 4842 scope.go:117] "RemoveContainer" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.209595 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551\": container with ID starting with e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551 not found: ID does not exist" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209634 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551"} err="failed to get container status \"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551\": rpc error: code = NotFound desc = could not find container \"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551\": container with ID starting with e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551 not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209662 4842 scope.go:117] "RemoveContainer" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99" Feb 02 06:51:12 crc 
kubenswrapper[4842]: E0202 06:51:12.209998 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99\": container with ID starting with 9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99 not found: ID does not exist" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.210035 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99"} err="failed to get container status \"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99\": rpc error: code = NotFound desc = could not find container \"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99\": container with ID starting with 9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99 not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.210056 4842 scope.go:117] "RemoveContainer" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.227693 4842 scope.go:117] "RemoveContainer" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.245519 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.250126 4842 scope.go:117] "RemoveContainer" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.266647 4842 scope.go:117] "RemoveContainer" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.267228 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946\": container with ID starting with 85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946 not found: ID does not exist" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267263 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"} err="failed to get container status \"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946\": rpc error: code = NotFound desc = could not find container \"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946\": container with ID starting with 85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946 not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267290 4842 scope.go:117] "RemoveContainer" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.267768 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35\": container with ID starting with 
e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35 not found: ID does not exist" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267792 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35"} err="failed to get container status \"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35\": rpc error: code = NotFound desc = could not find container \"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35\": container with ID starting with e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35 not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267810 4842 scope.go:117] "RemoveContainer" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.268201 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a\": container with ID starting with fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a not found: ID does not exist" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.268241 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a"} err="failed to get container status \"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a\": rpc error: code = NotFound desc = could not find container \"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a\": container with ID starting with fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.268257 4842 scope.go:117] "RemoveContainer" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.283233 4842 scope.go:117] "RemoveContainer" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.300693 4842 scope.go:117] "RemoveContainer" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.317508 4842 scope.go:117] "RemoveContainer" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.317845 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a\": container with ID starting with d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a not found: ID does not exist" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.317947 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"} err="failed to get container status \"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a\": rpc error: code = NotFound desc 
= could not find container \"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a\": container with ID starting with d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.318001 4842 scope.go:117] "RemoveContainer" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.318490 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41\": container with ID starting with 6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41 not found: ID does not exist" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.318528 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41"} err="failed to get container status \"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41\": rpc error: code = NotFound desc = could not find container \"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41\": container with ID starting with 6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41 not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.318557 4842 scope.go:117] "RemoveContainer" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164" Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.319094 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164\": container with ID starting with 4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164 not found: ID does not exist" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.319134 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164"} err="failed to get container status \"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164\": rpc error: code = NotFound desc = could not find container \"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164\": container with ID starting with 4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164 not found: ID does not exist" Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.479388 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.485374 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.489981 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.494721 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.498280 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.501958 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.446194 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" path="/var/lib/kubelet/pods/671957e9-c40d-416d-8756-a4d7f0abc317/volumes" Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.447476 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" path="/var/lib/kubelet/pods/69e94ec9-2a3b-4f85-a2b7-9e2f07359890/volumes" Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.448598 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" path="/var/lib/kubelet/pods/99088cf9-5dcc-4837-943b-4deca45c1401/volumes" Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.450725 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" path="/var/lib/kubelet/pods/c4f753a1-ecf0-4b2c-9121-989677c6b2a6/volumes" Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.451613 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de569fea-56ca-4762-9a22-a12561c296b6" path="/var/lib/kubelet/pods/de569fea-56ca-4762-9a22-a12561c296b6/volumes" Feb 02 06:51:15 crc kubenswrapper[4842]: I0202 06:51:15.195310 4842 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.177270 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182121 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182202 4842 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60" exitCode=137 Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182287 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60"} Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182341 4842 scope.go:117] "RemoveContainer" containerID="2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf" Feb 02 06:51:27 crc kubenswrapper[4842]: I0202 06:51:27.192812 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 06:51:27 crc kubenswrapper[4842]: I0202 06:51:27.196397 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"78da50ea86651ac25aa1e24a46dbb6da9b002e43bb0d9c6ca3d0e83131eb7c66"} Feb 02 06:51:31 crc 
kubenswrapper[4842]: I0202 06:51:31.898626 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:51:35 crc kubenswrapper[4842]: I0202 06:51:35.604406 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:51:35 crc kubenswrapper[4842]: I0202 06:51:35.610572 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:51:36 crc kubenswrapper[4842]: I0202 06:51:36.254119 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:51:42 crc kubenswrapper[4842]: I0202 06:51:42.146526 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:51:42 crc kubenswrapper[4842]: I0202 06:51:42.146835 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.282525 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbb7f"] Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283063 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283078 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283090 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283098 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283112 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283120 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283133 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerName="installer" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283142 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerName="installer" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283153 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 
06:51:44.283161 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283173 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283180 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283189 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283197 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283207 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283234 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283249 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283259 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283269 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283276 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283287 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283294 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283304 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283324 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283331 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283340 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-utilities" Feb 02 06:51:44 
crc kubenswrapper[4842]: I0202 06:51:44.283349 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283358 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283367 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283474 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283485 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283495 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283509 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283526 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283541 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerName="installer" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283551 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283946 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.285525 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.286122 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.287490 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.291982 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbb7f"] Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.293719 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.294763 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.394370 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.394639 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2zw\" (UniqueName: \"kubernetes.io/projected/57f599bc-2735-4763-8510-fe623d36bd10-kube-api-access-8t2zw\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.394783 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.496274 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t2zw\" (UniqueName: \"kubernetes.io/projected/57f599bc-2735-4763-8510-fe623d36bd10-kube-api-access-8t2zw\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.496532 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.496699 4842 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.498560 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.505866 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.522465 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t2zw\" (UniqueName: \"kubernetes.io/projected/57f599bc-2735-4763-8510-fe623d36bd10-kube-api-access-8t2zw\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.606464 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.079901 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbb7f"] Feb 02 06:51:45 crc kubenswrapper[4842]: W0202 06:51:45.088521 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57f599bc_2735_4763_8510_fe623d36bd10.slice/crio-ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7 WatchSource:0}: Error finding container ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7: Status 404 returned error can't find the container with id ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7 Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.312639 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" event={"ID":"57f599bc-2735-4763-8510-fe623d36bd10","Type":"ContainerStarted","Data":"5a028b56f6be560eecf683452dffe8b0b1a412dcbff418e49682824a67abab0c"} Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.312999 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.313011 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" event={"ID":"57f599bc-2735-4763-8510-fe623d36bd10","Type":"ContainerStarted","Data":"ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7"} Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.314320 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vbb7f 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.314389 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" podUID="57f599bc-2735-4763-8510-fe623d36bd10" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.327436 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" podStartSLOduration=1.327415117 podStartE2EDuration="1.327415117s" podCreationTimestamp="2026-02-02 06:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:51:45.32592974 +0000 UTC m=+330.703197672" watchObservedRunningTime="2026-02-02 06:51:45.327415117 +0000 UTC m=+330.704683039" Feb 02 06:51:46 crc kubenswrapper[4842]: I0202 06:51:46.322413 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:52:12 crc kubenswrapper[4842]: I0202 06:52:12.146405 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:52:12 crc kubenswrapper[4842]: I0202 06:52:12.147123 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.905388 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sw8ll"] Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.908468 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.913146 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.928897 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw8ll"] Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.943959 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-utilities\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.944059 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-catalog-content\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.944132 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kkx\" (UniqueName: \"kubernetes.io/projected/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-kube-api-access-f2kkx\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.045638 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-utilities\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.045728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-catalog-content\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.045819 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kkx\" (UniqueName: \"kubernetes.io/projected/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-kube-api-access-f2kkx\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.046678 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-catalog-content\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.046929 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-utilities\") pod \"redhat-marketplace-sw8ll\" (UID: 
\"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.079585 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l6tg7"] Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.081320 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.084321 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.095815 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kkx\" (UniqueName: \"kubernetes.io/projected/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-kube-api-access-f2kkx\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.108132 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6tg7"] Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.147263 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-utilities\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.147794 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-catalog-content\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.148013 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5ph\" (UniqueName: \"kubernetes.io/projected/23620448-86fc-4fa7-9295-d9ce6de9b8e6-kube-api-access-xm5ph\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.245161 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249379 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5ph\" (UniqueName: \"kubernetes.io/projected/23620448-86fc-4fa7-9295-d9ce6de9b8e6-kube-api-access-xm5ph\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249421 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-utilities\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249447 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-catalog-content\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249983 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-catalog-content\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.250268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-utilities\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.282521 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5ph\" (UniqueName: \"kubernetes.io/projected/23620448-86fc-4fa7-9295-d9ce6de9b8e6-kube-api-access-xm5ph\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.431036 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.748318 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw8ll"] Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.820543 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6tg7"] Feb 02 06:52:37 crc kubenswrapper[4842]: W0202 06:52:37.828725 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23620448_86fc_4fa7_9295_d9ce6de9b8e6.slice/crio-95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf WatchSource:0}: Error finding container 95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf: Status 404 returned error can't find the container with id 95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.672450 4842 generic.go:334] "Generic (PLEG): container finished" podID="7ea1df1c-0a15-44a8-9bb6-9f4513c3b482" containerID="202eb0ed13787963a10bf55283d9c4e45e11b412b59ad5ec22d40c596f942361" exitCode=0 Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.672601 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerDied","Data":"202eb0ed13787963a10bf55283d9c4e45e11b412b59ad5ec22d40c596f942361"} Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.672677 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerStarted","Data":"fa6fd8a0c06a34b0ca6d89f9c1b466f8f03ab7ac5f66075a426e545a2a8336a1"} Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.677609 4842 generic.go:334] "Generic (PLEG): container finished" podID="23620448-86fc-4fa7-9295-d9ce6de9b8e6" containerID="463ca7b2922b0b1e47b5a4f43563c0d021fbbe0f59f263c3790edf02314dc179" exitCode=0 Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.677672 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerDied","Data":"463ca7b2922b0b1e47b5a4f43563c0d021fbbe0f59f263c3790edf02314dc179"} Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.677694 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerStarted","Data":"95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf"} Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.277977 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.280838 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.284889 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.285056 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.285177 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.285254 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.301607 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.385878 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.385934 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.385992 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.386531 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.387113 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"certified-operators-cbwzh\" (UID: 
\"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.420146 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.486414 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.488383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.492073 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.499161 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.587813 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.587854 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.587892 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.618966 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.686283 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerStarted","Data":"9727ff0e3a5e00814bb179b8ed20d49caf1473e2400b6a7045c76b5ee6d4faf7"} Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.688477 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.688630 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.688673 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.689591 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.690060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.690265 4842 generic.go:334] "Generic (PLEG): container finished" podID="7ea1df1c-0a15-44a8-9bb6-9f4513c3b482" containerID="0011302c13329c7c74cf16d15cd5f5d4701095d6cd3bafecc836bf320d978a43" exitCode=0 Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.690309 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerDied","Data":"0011302c13329c7c74cf16d15cd5f5d4701095d6cd3bafecc836bf320d978a43"} Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.728902 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.822903 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.861428 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 06:52:39 crc kubenswrapper[4842]: W0202 06:52:39.869300 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9969706e_304c_490a_b15d_7d0bfc99261c.slice/crio-87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997 WatchSource:0}: Error finding container 87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997: Status 404 returned error can't find the container with id 87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997 Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.057791 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 06:52:40 crc kubenswrapper[4842]: W0202 06:52:40.086862 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d21de2_d86f_4434_a132_ac1e81b63377.slice/crio-2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a WatchSource:0}: Error finding container 2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a: Status 404 returned error can't find the container with id 2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.697622 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerStarted","Data":"54dec166f57b910e181cdf37ff3f59c04c4e26cfb2b9d16cebee45ff070289b6"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.698687 4842 generic.go:334] "Generic (PLEG): container finished" podID="9969706e-304c-490a-b15d-7d0bfc99261c" containerID="cdc5b57eaa471b1df4736cdcd50fb5f9ddf54fbd99f33734d0e692fc9f77a97f" exitCode=0 Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.698740 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"cdc5b57eaa471b1df4736cdcd50fb5f9ddf54fbd99f33734d0e692fc9f77a97f"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.698758 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerStarted","Data":"87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.701072 4842 generic.go:334] "Generic (PLEG): container finished" podID="79d21de2-d86f-4434-a132-ac1e81b63377" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" exitCode=0 Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.701120 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.701140 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" 
event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerStarted","Data":"2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.705509 4842 generic.go:334] "Generic (PLEG): container finished" podID="23620448-86fc-4fa7-9295-d9ce6de9b8e6" containerID="9727ff0e3a5e00814bb179b8ed20d49caf1473e2400b6a7045c76b5ee6d4faf7" exitCode=0 Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.705664 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerDied","Data":"9727ff0e3a5e00814bb179b8ed20d49caf1473e2400b6a7045c76b5ee6d4faf7"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.724646 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sw8ll" podStartSLOduration=3.314480442 podStartE2EDuration="4.724620025s" podCreationTimestamp="2026-02-02 06:52:36 +0000 UTC" firstStartedPulling="2026-02-02 06:52:38.674645229 +0000 UTC m=+384.051913151" lastFinishedPulling="2026-02-02 06:52:40.084784822 +0000 UTC m=+385.462052734" observedRunningTime="2026-02-02 06:52:40.7207987 +0000 UTC m=+386.098066612" watchObservedRunningTime="2026-02-02 06:52:40.724620025 +0000 UTC m=+386.101887967" Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.713057 4842 generic.go:334] "Generic (PLEG): container finished" podID="79d21de2-d86f-4434-a132-ac1e81b63377" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" exitCode=0 Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.713294 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809"} Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.718449 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerStarted","Data":"105268e6936de62f4c5db8f06e036fa59b9f99d9a1c12f936125be1f6dcb0eaa"} Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.724584 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerStarted","Data":"308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b"} Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.765663 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l6tg7" podStartSLOduration=2.312734849 podStartE2EDuration="4.765644104s" podCreationTimestamp="2026-02-02 06:52:37 +0000 UTC" firstStartedPulling="2026-02-02 06:52:38.679097289 +0000 UTC m=+384.056365211" lastFinishedPulling="2026-02-02 06:52:41.132006514 +0000 UTC m=+386.509274466" observedRunningTime="2026-02-02 06:52:41.762877016 +0000 UTC m=+387.140144948" watchObservedRunningTime="2026-02-02 06:52:41.765644104 +0000 UTC m=+387.142912016" Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.146344 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.146439 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.146523 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.147505 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.147655 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb" gracePeriod=600 Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.729987 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb" exitCode=0 Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.730078 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb"} Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.730338 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00"} Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.730363 4842 scope.go:117] "RemoveContainer" containerID="b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5" Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.733338 4842 generic.go:334] "Generic (PLEG): container finished" podID="9969706e-304c-490a-b15d-7d0bfc99261c" containerID="308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b" exitCode=0 Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.733526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b"} Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.737286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerStarted","Data":"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52"} Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.802468 4842 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-7hg8l" podStartSLOduration=2.377048381 podStartE2EDuration="3.80243768s" podCreationTimestamp="2026-02-02 06:52:39 +0000 UTC" firstStartedPulling="2026-02-02 06:52:40.702187431 +0000 UTC m=+386.079455343" lastFinishedPulling="2026-02-02 06:52:42.12757671 +0000 UTC m=+387.504844642" observedRunningTime="2026-02-02 06:52:42.796594695 +0000 UTC m=+388.173862647" watchObservedRunningTime="2026-02-02 06:52:42.80243768 +0000 UTC m=+388.179705642" Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.975382 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzdms"] Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.975977 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.990798 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzdms"] Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127791 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-certificates\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127840 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-tls\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/19ce6df2-ffac-4035-8737-e17bebecbf03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127899 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-trusted-ca\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128038 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7mx7\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-kube-api-access-w7mx7\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128148 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/19ce6df2-ffac-4035-8737-e17bebecbf03-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128193 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-bound-sa-token\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.160645 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229419 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-certificates\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229485 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-tls\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229523 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/19ce6df2-ffac-4035-8737-e17bebecbf03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229564 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-trusted-ca\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229586 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7mx7\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-kube-api-access-w7mx7\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229646 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/19ce6df2-ffac-4035-8737-e17bebecbf03-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229671 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-bound-sa-token\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.230567 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/19ce6df2-ffac-4035-8737-e17bebecbf03-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.230715 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-trusted-ca\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.230904 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-certificates\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.240832 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/19ce6df2-ffac-4035-8737-e17bebecbf03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.245458 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-tls\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.257482 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-bound-sa-token\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.259818 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7mx7\" (UniqueName: 
\"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-kube-api-access-w7mx7\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.293205 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.508112 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzdms"] Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.745143 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerStarted","Data":"e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968"} Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.753072 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" event={"ID":"19ce6df2-ffac-4035-8737-e17bebecbf03","Type":"ContainerStarted","Data":"49106198fd0e2923d4960595d5c3f7760e4cb0aa2f4b6d1c7ec4eec257c6e80e"} Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.753101 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" event={"ID":"19ce6df2-ffac-4035-8737-e17bebecbf03","Type":"ContainerStarted","Data":"878bbedeeb19eb69ed9665aa9d457705f19cb2abf881f6c7940046f5bd4b5f98"} Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.753375 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.788819 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" podStartSLOduration=1.788803718 podStartE2EDuration="1.788803718s" podCreationTimestamp="2026-02-02 06:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:52:43.787127367 +0000 UTC m=+389.164395299" watchObservedRunningTime="2026-02-02 06:52:43.788803718 +0000 UTC m=+389.166071630" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.790103 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cbwzh" podStartSLOduration=2.163152894 podStartE2EDuration="4.79009905s" podCreationTimestamp="2026-02-02 06:52:39 +0000 UTC" firstStartedPulling="2026-02-02 06:52:40.699745481 +0000 UTC m=+386.077013393" lastFinishedPulling="2026-02-02 06:52:43.326691627 +0000 UTC m=+388.703959549" observedRunningTime="2026-02-02 06:52:43.771604143 +0000 UTC m=+389.148872075" watchObservedRunningTime="2026-02-02 06:52:43.79009905 +0000 UTC m=+389.167366962" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.246358 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.247044 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.320731 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.432169 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.432268 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.841682 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:48 crc kubenswrapper[4842]: I0202 06:52:48.501111 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6tg7" podUID="23620448-86fc-4fa7-9295-d9ce6de9b8e6" containerName="registry-server" probeResult="failure" output=< Feb 02 06:52:48 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 06:52:48 crc kubenswrapper[4842]: > Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.620205 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.620306 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.684020 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.823938 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.824023 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.862996 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.913678 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:50 crc kubenswrapper[4842]: I0202 06:52:50.856554 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:57 crc kubenswrapper[4842]: I0202 06:52:57.470822 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:57 crc kubenswrapper[4842]: I0202 06:52:57.520434 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:53:03 crc kubenswrapper[4842]: I0202 06:53:03.301706 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:53:03 crc kubenswrapper[4842]: I0202 06:53:03.402814 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.462700 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" 
podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" containerID="cri-o://c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" gracePeriod=30 Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.863308 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933062 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933184 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933246 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933290 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933549 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933601 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933975 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod 
"b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.934530 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.943550 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.944469 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr" (OuterVolumeSpecName: "kube-api-access-tjbqr") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "kube-api-access-tjbqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.947500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.949755 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.950116 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.958388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035135 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035188 4842 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035207 4842 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035262 4842 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035276 4842 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035290 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035303 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066386 4842 generic.go:334] "Generic (PLEG): container finished" podID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" exitCode=0 Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066468 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerDied","Data":"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4"} Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066486 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066534 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerDied","Data":"abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652"} Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066573 4842 scope.go:117] "RemoveContainer" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.101908 4842 scope.go:117] "RemoveContainer" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" Feb 02 06:53:29 crc kubenswrapper[4842]: E0202 06:53:29.103050 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4\": container with ID starting with c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4 not found: ID does not exist" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.103102 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4"} err="failed to get container status \"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4\": rpc error: code = NotFound desc = could not find container \"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4\": container with ID starting with c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4 not found: ID does not exist" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.120798 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.124791 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.445418 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" path="/var/lib/kubelet/pods/b76f3bc4-4824-422b-a14a-e7cd193ed30d/volumes" Feb 02 06:54:42 crc kubenswrapper[4842]: I0202 06:54:42.146047 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:54:42 crc kubenswrapper[4842]: I0202 06:54:42.147059 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:55:12 crc kubenswrapper[4842]: I0202 06:55:12.146094 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
02 06:55:12 crc kubenswrapper[4842]: I0202 06:55:12.146802 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.146371 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.148653 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.148847 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.149856 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.150099 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00" gracePeriod=600 Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961416 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00" exitCode=0 Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961495 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00"} Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961909 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62"} Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961945 4842 scope.go:117] "RemoveContainer" containerID="26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb" Feb 02 06:56:15 crc kubenswrapper[4842]: I0202 06:56:15.746882 4842 scope.go:117] "RemoveContainer" containerID="55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7" Feb 02 06:57:42 crc kubenswrapper[4842]: I0202 06:57:42.145913 4842 patch_prober.go:28] 
interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:57:42 crc kubenswrapper[4842]: I0202 06:57:42.146650 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:58:12 crc kubenswrapper[4842]: I0202 06:58:12.146605 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:58:12 crc kubenswrapper[4842]: I0202 06:58:12.147441 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.649855 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656316 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" containerID="cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656477 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" containerID="cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656530 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" containerID="cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656519 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" containerID="cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656513 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" containerID="cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656559 4842 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" containerID="cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656600 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.709945 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" containerID="cri-o://25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: E0202 06:58:25.965902 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-conmon-64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-conmon-6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-conmon-97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d.scope\": RecentStats: unable to find data in memory cache]" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.013588 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.015428 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-acl-logging/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.015848 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-controller/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.016246 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071282 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n2fbb"] Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071701 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071713 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071722 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071728 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071736 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kubecfg-setup" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071743 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kubecfg-setup" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071751 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071758 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071766 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071772 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071780 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071786 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071795 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071802 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071809 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071815 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071826 4842 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071832 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071841 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071847 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071855 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071861 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071869 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071875 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071956 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071966 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071973 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071979 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071985 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071992 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071999 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072007 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072013 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072020 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072030 4842 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072038 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.072118 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072125 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.072134 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072140 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072293 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.075856 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178511 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178544 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178578 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178601 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178603 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod 
"3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178600 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178625 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178653 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178685 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178686 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178725 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash" (OuterVolumeSpecName: "host-slash") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178734 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178763 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178795 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178912 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178960 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178970 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178996 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179039 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179075 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179070 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179111 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179158 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179191 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179315 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179391 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179403 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179460 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179575 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-log-socket\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179647 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-ovn\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179679 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-systemd-units\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179739 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-netns\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179777 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179787 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-var-lib-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179845 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179884 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log" (OuterVolumeSpecName: "node-log") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179895 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179915 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket" (OuterVolumeSpecName: "log-socket") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179940 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-config\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179960 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179994 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-script-lib\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180136 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180195 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180270 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-systemd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180312 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-node-log\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180350 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-env-overrides\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180384 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-netd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180451 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-etc-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180481 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180514 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-slash\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180552 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-kubelet\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180580 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8hm\" (UniqueName: \"kubernetes.io/projected/cd14c13b-bd70-4e1c-9b22-b181fc32f958-kube-api-access-lv8hm\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180615 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-bin\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180712 4842 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180733 4842 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180751 4842 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180768 4842 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180786 4842 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180804 4842 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180823 4842 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180839 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180856 4842 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180873 4842 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180890 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180907 4842 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180926 4842 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180943 4842 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180961 4842 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180977 4842 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180994 4842 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.186255 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp" (OuterVolumeSpecName: "kube-api-access-qdmbp") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "kube-api-access-qdmbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.186732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.206133 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.238130 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.240088 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-acl-logging/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.240806 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-controller/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241075 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241101 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241109 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241117 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241124 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241130 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241138 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" exitCode=143 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241146 4842 generic.go:334] "Generic (PLEG): container 
finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" exitCode=143 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241190 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241239 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241257 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241270 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241294 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241305 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241318 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241326 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241333 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241340 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241347 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 
06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241353 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241360 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241355 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241377 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241367 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241570 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241593 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241605 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241617 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241629 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241641 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241653 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241665 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241677 4842 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241688 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241705 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241721 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241734 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241747 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241757 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241767 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241778 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241788 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241799 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241809 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241820 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"ad55e0c8d5649109a4ec1a9a3e073a9a325c6f3565638121dd923673a8430c3b"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241854 
4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241867 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241877 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241888 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241899 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241910 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241922 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241933 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241944 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241955 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.243841 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/2.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244740 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244802 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" containerID="3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631" exitCode=2 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244842 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerDied","Data":"3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244881 4842 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.245406 4842 scope.go:117] "RemoveContainer" containerID="3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.245701 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gmkx9_openshift-multus(c1fd21cd-ea6a-44a0-b136-f338fc97cf18)\"" pod="openshift-multus/multus-gmkx9" podUID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.256827 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.280324 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282440 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282465 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-systemd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282486 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-node-log\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282508 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-env-overrides\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282523 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-netd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282546 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-etc-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282563 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282580 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-slash\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282596 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv8hm\" (UniqueName: \"kubernetes.io/projected/cd14c13b-bd70-4e1c-9b22-b181fc32f958-kube-api-access-lv8hm\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282611 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-kubelet\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-systemd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282652 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-bin\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282619 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-node-log\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282710 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-log-socket\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282704 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282752 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-slash\") pod \"ovnkube-node-n2fbb\" (UID: 
\"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282673 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-log-socket\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282779 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-etc-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282800 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-ovn\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282826 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-systemd-units\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282834 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-kubelet\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-netns\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282886 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-var-lib-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282909 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282929 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-config\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc 
kubenswrapper[4842]: I0202 06:58:26.282951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-ovn\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282982 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-script-lib\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282790 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-netd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282724 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283059 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-netns\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283090 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-var-lib-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-systemd-units\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282853 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-bin\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283013 4842 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284247 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-config\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284561 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-env-overrides\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284765 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284834 4842 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284865 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.287140 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-script-lib\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.289652 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.303905 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.304109 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.309528 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.310569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv8hm\" (UniqueName: \"kubernetes.io/projected/cd14c13b-bd70-4e1c-9b22-b181fc32f958-kube-api-access-lv8hm\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 
06:58:26.319278 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.332554 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.350108 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.366174 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.378875 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.398181 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.410540 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.414628 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415019 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415066 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415091 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415332 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415349 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container 
\"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415382 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415610 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415636 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415652 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415904 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415951 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415967 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.416183 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416203 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} 
err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416236 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.416563 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416628 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416687 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.416983 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417003 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417017 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.417301 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417320 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417332 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.417512 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417556 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417571 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.417827 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417843 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417855 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418093 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could 
not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418126 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418395 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418409 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418852 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418980 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419500 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419589 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419924 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419945 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420197 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420247 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420530 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420550 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420811 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420864 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421135 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421157 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421453 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 
8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421507 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421803 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.422332 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423057 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423082 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423674 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423722 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424076 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424125 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424494 4842 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424537 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424922 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424976 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425300 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425343 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425642 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425667 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425979 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 
02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426022 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426468 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426518 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426908 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426937 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427248 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427286 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427661 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427690 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427988 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status 
\"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428019 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428383 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428410 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428710 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428762 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429167 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429195 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429571 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429600 4842 scope.go:117] "RemoveContainer" 
containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429890 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429942 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.430247 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.430276 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.430547 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.256903 4842 generic.go:334] "Generic (PLEG): container finished" podID="cd14c13b-bd70-4e1c-9b22-b181fc32f958" containerID="c773a8e662798b3ea6b5b7e12e5e91862c21ba2f37849b77e63eb9b0e601fc93" exitCode=0 Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.257242 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerDied","Data":"c773a8e662798b3ea6b5b7e12e5e91862c21ba2f37849b77e63eb9b0e601fc93"} Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.257277 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"b2ccbbf96e82939af0ad2939dbd92ab1daed6a0a27472456b13a227a47610578"} Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.446403 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" path="/var/lib/kubelet/pods/3f1e4f7c-d788-428b-bea6-e862234bfc59/volumes" Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.265772 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" 
event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"fcc1237e67d31d2a48b1c31a500e518e9ea752835aeaa32be6a318e2a8f64fe8"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266508 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"292f77e1b6a81524ec24767c131d7b36e7b618b95af6997d52309613d974b917"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266531 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"db105e7a453b1010c59b8183ce87644a8c934f06da215abb1acc6fdcb057dc4c"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"730360a7a296c9054b758ed0472b2f5a9b8a1c6e91ee584109142845e2816172"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266564 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"76748dc8f7ea234e2d4887b8c145a00bbd46d074c24a344c9d8386dcdfa75e07"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"cb3364c588faa6b89f9248648ec4d99b1eaf155f60fe973ad1e53b3982551ae8"} Feb 02 06:58:31 crc kubenswrapper[4842]: I0202 06:58:31.295531 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"392d61047affe59f5b83792c432c0d27e89d3a65324ff2350dc1bc801b09d3d0"} Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.310402 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"0a1f22c99e002de5246d42c7817684e4759b0d3b29d6dc85e9de02c0556faa61"} Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.310786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.310812 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.311956 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.344357 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" podStartSLOduration=7.344336872 podStartE2EDuration="7.344336872s" podCreationTimestamp="2026-02-02 06:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:58:33.342805774 +0000 UTC m=+738.720073696" watchObservedRunningTime="2026-02-02 06:58:33.344336872 +0000 UTC m=+738.721604814" Feb 02 06:58:33 crc kubenswrapper[4842]: 
I0202 06:58:33.344657 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.353586 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.970609 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-q54vf"] Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.972307 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.976268 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.976598 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.977634 4842 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-9bxn5" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.977891 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.981395 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q54vf"] Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.106671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.106880 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.107035 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.208506 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.208591 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.208660 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.209009 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.210413 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.241599 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.306833 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342189 4842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342352 4842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342381 4842 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342428 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-q54vf" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" Feb 02 06:58:36 crc kubenswrapper[4842]: I0202 06:58:36.326505 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: I0202 06:58:36.327641 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368733 4842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368828 4842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368863 4842 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368938 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-q54vf" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" Feb 02 06:58:38 crc kubenswrapper[4842]: I0202 06:58:38.434148 4842 scope.go:117] "RemoveContainer" containerID="3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631" Feb 02 06:58:39 crc kubenswrapper[4842]: I0202 06:58:39.348465 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/2.log" Feb 02 06:58:39 crc kubenswrapper[4842]: I0202 06:58:39.349733 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:58:39 crc kubenswrapper[4842]: I0202 06:58:39.349938 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"e4c8473c86d301bda5245277ad649c0655932872ce690973718b44fcdded7794"} Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.146103 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.146183 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.146283 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.147051 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.147143 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62" gracePeriod=600 Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.387318 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62" exitCode=0 Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.387416 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62"} Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.387833 4842 scope.go:117] "RemoveContainer" 
containerID="5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00" Feb 02 06:58:43 crc kubenswrapper[4842]: I0202 06:58:43.398204 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93"} Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.433431 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.434716 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.918187 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q54vf"] Feb 02 06:58:50 crc kubenswrapper[4842]: W0202 06:58:50.939143 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd49ae49a_4fb5_4d9c_894e_6a743cbe9c20.slice/crio-c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78 WatchSource:0}: Error finding container c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78: Status 404 returned error can't find the container with id c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78 Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.943286 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 06:58:51 crc kubenswrapper[4842]: I0202 06:58:51.460813 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q54vf" event={"ID":"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20","Type":"ContainerStarted","Data":"c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78"} Feb 02 06:58:52 crc kubenswrapper[4842]: I0202 06:58:52.470056 4842 generic.go:334] "Generic (PLEG): container finished" podID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerID="a4b70fd2cb99fa10540e4dadeda4038897a65ae03e5544ded9e1704361291cbe" exitCode=0 Feb 02 06:58:52 crc kubenswrapper[4842]: I0202 06:58:52.470189 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q54vf" event={"ID":"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20","Type":"ContainerDied","Data":"a4b70fd2cb99fa10540e4dadeda4038897a65ae03e5544ded9e1704361291cbe"} Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.768485 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.858072 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.859891 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.859984 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.860469 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" (UID: "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.865892 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z" (OuterVolumeSpecName: "kube-api-access-dqq7z") pod "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" (UID: "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20"). InnerVolumeSpecName "kube-api-access-dqq7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.881707 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" (UID: "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.962414 4842 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.962481 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.962511 4842 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:54 crc kubenswrapper[4842]: I0202 06:58:54.486300 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q54vf" event={"ID":"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20","Type":"ContainerDied","Data":"c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78"} Feb 02 06:58:54 crc kubenswrapper[4842]: I0202 06:58:54.486357 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78" Feb 02 06:58:54 crc kubenswrapper[4842]: I0202 06:58:54.486385 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:56 crc kubenswrapper[4842]: I0202 06:58:56.490253 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:57 crc kubenswrapper[4842]: I0202 06:58:57.334974 4842 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.350161 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n"] Feb 02 06:59:01 crc kubenswrapper[4842]: E0202 06:59:01.350836 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerName="storage" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.350857 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerName="storage" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.351023 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerName="storage" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.352100 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.357638 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.365958 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n"] Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.370971 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.371044 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.371115 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473113 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473601 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473691 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473726 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.474449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.503590 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.668122 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.941286 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n"] Feb 02 06:59:01 crc kubenswrapper[4842]: W0202 06:59:01.945606 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e244b75_9c3a_4f20_9bd7_071fb2cc7883.slice/crio-d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c WatchSource:0}: Error finding container d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c: Status 404 returned error can't find the container with id d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c Feb 02 06:59:02 crc kubenswrapper[4842]: I0202 06:59:02.536902 4842 generic.go:334] "Generic (PLEG): container finished" podID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerID="f1b8819b4d1cd17b3c3b1714c1d0379f57ed6dce58950e4412eb686b40f4f5a8" exitCode=0 Feb 02 06:59:02 crc kubenswrapper[4842]: I0202 06:59:02.537710 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"f1b8819b4d1cd17b3c3b1714c1d0379f57ed6dce58950e4412eb686b40f4f5a8"} Feb 02 06:59:02 crc kubenswrapper[4842]: I0202 06:59:02.538859 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerStarted","Data":"d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c"} Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.518440 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.519369 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.541369 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.610776 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.610887 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.611044 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.711951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712020 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712360 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.736204 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.832108 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.034168 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.549732 4842 generic.go:334] "Generic (PLEG): container finished" podID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerID="7524c5a8e7b6861a405892c3cbf5335926049c145ff021686acc4b0f8e96bf08" exitCode=0 Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.549798 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"7524c5a8e7b6861a405892c3cbf5335926049c145ff021686acc4b0f8e96bf08"} Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.552257 4842 generic.go:334] "Generic (PLEG): container finished" podID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" exitCode=0 Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.552321 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f"} Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.552353 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerStarted","Data":"2ff52db295c35e880c88d5b5145e78895e572aab912fdc448aa89426cbd58de9"} Feb 02 06:59:05 crc kubenswrapper[4842]: I0202 06:59:05.573296 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerStarted","Data":"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3"} Feb 02 06:59:05 crc kubenswrapper[4842]: I0202 06:59:05.580941 4842 generic.go:334] "Generic (PLEG): container finished" podID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerID="d6100efc6d95a57f6ea0c8740bd259b211506a1b5192e697b860d7dcd3822564" exitCode=0 Feb 02 06:59:05 crc kubenswrapper[4842]: I0202 06:59:05.580992 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"d6100efc6d95a57f6ea0c8740bd259b211506a1b5192e697b860d7dcd3822564"} Feb 02 06:59:06 crc kubenswrapper[4842]: I0202 06:59:06.593708 4842 generic.go:334] "Generic (PLEG): container finished" podID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" exitCode=0 Feb 02 06:59:06 crc kubenswrapper[4842]: I0202 06:59:06.593809 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" 
event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3"} Feb 02 06:59:06 crc kubenswrapper[4842]: I0202 06:59:06.932831 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.061749 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.061855 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.061903 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.062367 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle" (OuterVolumeSpecName: "bundle") pod "7e244b75-9c3a-4f20-9bd7-071fb2cc7883" (UID: "7e244b75-9c3a-4f20-9bd7-071fb2cc7883"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.073035 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h" (OuterVolumeSpecName: "kube-api-access-d5c4h") pod "7e244b75-9c3a-4f20-9bd7-071fb2cc7883" (UID: "7e244b75-9c3a-4f20-9bd7-071fb2cc7883"). InnerVolumeSpecName "kube-api-access-d5c4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.099320 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util" (OuterVolumeSpecName: "util") pod "7e244b75-9c3a-4f20-9bd7-071fb2cc7883" (UID: "7e244b75-9c3a-4f20-9bd7-071fb2cc7883"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.163448 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.163482 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.163494 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.603681 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerStarted","Data":"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a"} Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.615989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c"} Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.616057 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.616074 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.635198 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8ltqd" podStartSLOduration=2.089322905 podStartE2EDuration="4.635177555s" podCreationTimestamp="2026-02-02 06:59:03 +0000 UTC" firstStartedPulling="2026-02-02 06:59:04.553239175 +0000 UTC m=+769.930507087" lastFinishedPulling="2026-02-02 06:59:07.099093785 +0000 UTC m=+772.476361737" observedRunningTime="2026-02-02 06:59:07.628844989 +0000 UTC m=+773.006112911" watchObservedRunningTime="2026-02-02 06:59:07.635177555 +0000 UTC m=+773.012445487" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862054 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6qznw"] Feb 02 06:59:11 crc kubenswrapper[4842]: E0202 06:59:11.862578 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="pull" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862597 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="pull" Feb 02 06:59:11 crc kubenswrapper[4842]: E0202 06:59:11.862614 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="util" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862621 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="util" Feb 02 06:59:11 crc kubenswrapper[4842]: E0202 06:59:11.862637 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="extract" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862647 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="extract" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862762 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="extract" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.863136 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.865031 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.865324 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.865398 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fpv6k" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.876439 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6qznw"] Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.926159 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9t2h\" (UniqueName: \"kubernetes.io/projected/3e9d6ba3-9c88-4425-87b9-8a5abd664ce7-kube-api-access-b9t2h\") pod \"nmstate-operator-646758c888-6qznw\" (UID: \"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7\") " pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.027194 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9t2h\" (UniqueName: \"kubernetes.io/projected/3e9d6ba3-9c88-4425-87b9-8a5abd664ce7-kube-api-access-b9t2h\") pod \"nmstate-operator-646758c888-6qznw\" (UID: \"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7\") " pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.062673 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9t2h\" (UniqueName: \"kubernetes.io/projected/3e9d6ba3-9c88-4425-87b9-8a5abd664ce7-kube-api-access-b9t2h\") pod \"nmstate-operator-646758c888-6qznw\" (UID: \"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7\") " pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.185806 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.428880 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6qznw"] Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.649375 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" event={"ID":"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7","Type":"ContainerStarted","Data":"36e20619a2ef69ebeef34d4e079a85e04f26457dffdd43d4fc16cce1a90fc032"} Feb 02 06:59:13 crc kubenswrapper[4842]: I0202 06:59:13.832769 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:13 crc kubenswrapper[4842]: I0202 06:59:13.833074 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:14 crc kubenswrapper[4842]: I0202 06:59:14.659711 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" event={"ID":"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7","Type":"ContainerStarted","Data":"ce7de959462d86cd7cbde251da43a9514aa907ca1c9f308f5cee35247dd9e55d"} Feb 02 06:59:14 crc kubenswrapper[4842]: I0202 06:59:14.689418 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" podStartSLOduration=2.049395907 podStartE2EDuration="3.689393652s" podCreationTimestamp="2026-02-02 06:59:11 +0000 UTC" firstStartedPulling="2026-02-02 06:59:12.434512527 +0000 UTC m=+777.811780439" lastFinishedPulling="2026-02-02 06:59:14.074510262 +0000 UTC m=+779.451778184" observedRunningTime="2026-02-02 06:59:14.682640226 +0000 UTC m=+780.059908148" watchObservedRunningTime="2026-02-02 06:59:14.689393652 +0000 UTC m=+780.066661604" Feb 02 06:59:14 crc kubenswrapper[4842]: I0202 06:59:14.885445 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8ltqd" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" probeResult="failure" output=< Feb 02 06:59:14 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 06:59:14 crc kubenswrapper[4842]: > Feb 02 06:59:15 crc kubenswrapper[4842]: I0202 06:59:15.840065 4842 scope.go:117] "RemoveContainer" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" Feb 02 06:59:16 crc kubenswrapper[4842]: I0202 06:59:16.677166 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/2.log" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.555184 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h4nv5"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.555987 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.558617 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tq46r" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.568849 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz8b6\" (UniqueName: \"kubernetes.io/projected/a4c06cff-e4b9-41be-a253-b1bf70dc1dc8-kube-api-access-rz8b6\") pod \"nmstate-metrics-54757c584b-h4nv5\" (UID: \"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.584342 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h4nv5"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.597705 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.598533 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.601849 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.602042 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hrqrp"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.603155 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.639591 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670041 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz8b6\" (UniqueName: \"kubernetes.io/projected/a4c06cff-e4b9-41be-a253-b1bf70dc1dc8-kube-api-access-rz8b6\") pod \"nmstate-metrics-54757c584b-h4nv5\" (UID: \"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670121 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfpl\" (UniqueName: \"kubernetes.io/projected/558d578f-dad2-4317-8efd-628e30fe306e-kube-api-access-6pfpl\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670165 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-nmstate-lock\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670197 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-dbus-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " 
pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670243 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-ovs-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670286 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9864264-6d23-4a03-8464-6b52a81c01d1-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670313 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmqd\" (UniqueName: \"kubernetes.io/projected/a9864264-6d23-4a03-8464-6b52a81c01d1-kube-api-access-tbmqd\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.695663 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz8b6\" (UniqueName: \"kubernetes.io/projected/a4c06cff-e4b9-41be-a253-b1bf70dc1dc8-kube-api-access-rz8b6\") pod \"nmstate-metrics-54757c584b-h4nv5\" (UID: \"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.708592 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.709347 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.711687 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-n7j2j" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.711949 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.712150 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.721203 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771032 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfpl\" (UniqueName: \"kubernetes.io/projected/558d578f-dad2-4317-8efd-628e30fe306e-kube-api-access-6pfpl\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771090 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-nmstate-lock\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771119 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpf6k\" (UniqueName: \"kubernetes.io/projected/1875099f-a0f5-4ba0-b757-35755a6d0bcd-kube-api-access-hpf6k\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771151 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1875099f-a0f5-4ba0-b757-35755a6d0bcd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-dbus-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771170 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-nmstate-lock\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771187 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: 
\"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771240 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-ovs-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9864264-6d23-4a03-8464-6b52a81c01d1-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771299 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-ovs-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771311 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbmqd\" (UniqueName: \"kubernetes.io/projected/a9864264-6d23-4a03-8464-6b52a81c01d1-kube-api-access-tbmqd\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771468 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-dbus-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.775850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9864264-6d23-4a03-8464-6b52a81c01d1-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.794913 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbmqd\" (UniqueName: \"kubernetes.io/projected/a9864264-6d23-4a03-8464-6b52a81c01d1-kube-api-access-tbmqd\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.795060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfpl\" (UniqueName: \"kubernetes.io/projected/558d578f-dad2-4317-8efd-628e30fe306e-kube-api-access-6pfpl\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.871866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1875099f-a0f5-4ba0-b757-35755a6d0bcd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: 
\"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.871923 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpf6k\" (UniqueName: \"kubernetes.io/projected/1875099f-a0f5-4ba0-b757-35755a6d0bcd-kube-api-access-hpf6k\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.871948 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: E0202 06:59:21.872101 4842 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 02 06:59:21 crc kubenswrapper[4842]: E0202 06:59:21.872159 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert podName:1875099f-a0f5-4ba0-b757-35755a6d0bcd nodeName:}" failed. No retries permitted until 2026-02-02 06:59:22.372140773 +0000 UTC m=+787.749408685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-z2jg2" (UID: "1875099f-a0f5-4ba0-b757-35755a6d0bcd") : secret "plugin-serving-cert" not found Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.872928 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1875099f-a0f5-4ba0-b757-35755a6d0bcd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.879056 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.898664 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpf6k\" (UniqueName: \"kubernetes.io/projected/1875099f-a0f5-4ba0-b757-35755a6d0bcd-kube-api-access-hpf6k\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.914110 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-655b6b84f6-kkbsq"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.914731 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.922984 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-kkbsq"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.940554 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.952420 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: W0202 06:59:21.972178 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod558d578f_dad2_4317_8efd_628e30fe306e.slice/crio-da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1 WatchSource:0}: Error finding container da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1: Status 404 returned error can't find the container with id da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1 Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.972896 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-trusted-ca-bundle\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.972947 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.972971 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-oauth-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973026 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh27l\" (UniqueName: \"kubernetes.io/projected/0ec80d70-e53d-4045-a2b7-a61ad0464be2-kube-api-access-gh27l\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973268 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-service-ca\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973304 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-oauth-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074151 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-service-ca\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074198 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-oauth-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074247 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-trusted-ca-bundle\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074266 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074291 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-oauth-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074317 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074331 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh27l\" (UniqueName: \"kubernetes.io/projected/0ec80d70-e53d-4045-a2b7-a61ad0464be2-kube-api-access-gh27l\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075524 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-service-ca\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075588 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-trusted-ca-bundle\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075648 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075647 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-oauth-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.080496 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.081167 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-oauth-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.095734 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh27l\" (UniqueName: \"kubernetes.io/projected/0ec80d70-e53d-4045-a2b7-a61ad0464be2-kube-api-access-gh27l\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.114428 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h4nv5"] Feb 02 06:59:22 crc kubenswrapper[4842]: W0202 06:59:22.157159 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9864264_6d23_4a03_8464_6b52a81c01d1.slice/crio-5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3 WatchSource:0}: Error finding container 5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3: Status 404 returned error can't find the container with id 5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3 Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.157440 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4"] Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.237618 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.377750 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.383257 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.493610 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-kkbsq"] Feb 02 06:59:22 crc kubenswrapper[4842]: W0202 06:59:22.502042 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec80d70_e53d_4045_a2b7_a61ad0464be2.slice/crio-584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2 WatchSource:0}: Error finding container 584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2: Status 404 returned error can't find the container with id 584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2 Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.628060 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.809337 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hrqrp" event={"ID":"558d578f-dad2-4317-8efd-628e30fe306e","Type":"ContainerStarted","Data":"da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.810532 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" event={"ID":"a9864264-6d23-4a03-8464-6b52a81c01d1","Type":"ContainerStarted","Data":"5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.811362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" event={"ID":"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8","Type":"ContainerStarted","Data":"d5c227670e422b5990993f1600b1bea0cd17c7c7b853776d7ab084a6a609bd35"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.817605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-kkbsq" event={"ID":"0ec80d70-e53d-4045-a2b7-a61ad0464be2","Type":"ContainerStarted","Data":"1f12aadf3b7e1323934209948c494ece5269230bd67c7ca6fcf6d600c8771f87"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.817676 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-kkbsq" event={"ID":"0ec80d70-e53d-4045-a2b7-a61ad0464be2","Type":"ContainerStarted","Data":"584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.843355 4842 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-655b6b84f6-kkbsq" podStartSLOduration=1.843334548 podStartE2EDuration="1.843334548s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:59:22.839658998 +0000 UTC m=+788.216926960" watchObservedRunningTime="2026-02-02 06:59:22.843334548 +0000 UTC m=+788.220602470" Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.145334 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2"] Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.825521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" event={"ID":"1875099f-a0f5-4ba0-b757-35755a6d0bcd","Type":"ContainerStarted","Data":"110ec542951fe4790c0535382759d19648b9b0a377f494779d0c7737e915394e"} Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.875124 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.930774 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.113749 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.834147 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hrqrp" event={"ID":"558d578f-dad2-4317-8efd-628e30fe306e","Type":"ContainerStarted","Data":"0f300c6d6e33c749d7219cadd322664a665ec3a7802a87d38b6295954a1c8fa7"} Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.835107 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.836031 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" event={"ID":"a9864264-6d23-4a03-8464-6b52a81c01d1","Type":"ContainerStarted","Data":"20420eabf4b1ba3779ed32b3886f9d38b14c1b9345038674f9ae67804f3490f9"} Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.836180 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.846432 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" event={"ID":"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8","Type":"ContainerStarted","Data":"fdde21f5e752186e690bd5c8bf4cda9461c234ab211e8e6deb9eb468de2ae398"} Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.849698 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hrqrp" podStartSLOduration=1.7119673930000001 podStartE2EDuration="3.849686534s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:21.980338717 +0000 UTC m=+787.357606629" lastFinishedPulling="2026-02-02 06:59:24.118057818 +0000 UTC m=+789.495325770" observedRunningTime="2026-02-02 06:59:24.848357301 +0000 UTC m=+790.225625224" watchObservedRunningTime="2026-02-02 06:59:24.849686534 +0000 UTC m=+790.226954456" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.881124 4842 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" podStartSLOduration=1.85111568 podStartE2EDuration="3.881098218s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:22.159610392 +0000 UTC m=+787.536878304" lastFinishedPulling="2026-02-02 06:59:24.18959289 +0000 UTC m=+789.566860842" observedRunningTime="2026-02-02 06:59:24.870081647 +0000 UTC m=+790.247349569" watchObservedRunningTime="2026-02-02 06:59:24.881098218 +0000 UTC m=+790.258366170" Feb 02 06:59:25 crc kubenswrapper[4842]: I0202 06:59:25.853947 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" event={"ID":"1875099f-a0f5-4ba0-b757-35755a6d0bcd","Type":"ContainerStarted","Data":"be97fab3866ef62f2e834b9c7047a4e6708c1bf5d2e9069a74fc6c1c53dea188"} Feb 02 06:59:25 crc kubenswrapper[4842]: I0202 06:59:25.854801 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8ltqd" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" containerID="cri-o://b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" gracePeriod=2 Feb 02 06:59:25 crc kubenswrapper[4842]: I0202 06:59:25.877652 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" podStartSLOduration=2.849464683 podStartE2EDuration="4.877633996s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:23.154776377 +0000 UTC m=+788.532044289" lastFinishedPulling="2026-02-02 06:59:25.18294565 +0000 UTC m=+790.560213602" observedRunningTime="2026-02-02 06:59:25.871878515 +0000 UTC m=+791.249146467" watchObservedRunningTime="2026-02-02 06:59:25.877633996 +0000 UTC m=+791.254901908" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.428694 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.540022 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.540442 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.540530 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.542804 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities" (OuterVolumeSpecName: "utilities") pod "ad236f43-9e37-4d8d-bdf5-838729fd7aa9" (UID: "ad236f43-9e37-4d8d-bdf5-838729fd7aa9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.549069 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d" (OuterVolumeSpecName: "kube-api-access-4qr5d") pod "ad236f43-9e37-4d8d-bdf5-838729fd7aa9" (UID: "ad236f43-9e37-4d8d-bdf5-838729fd7aa9"). InnerVolumeSpecName "kube-api-access-4qr5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.642443 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.642488 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.676350 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad236f43-9e37-4d8d-bdf5-838729fd7aa9" (UID: "ad236f43-9e37-4d8d-bdf5-838729fd7aa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.744251 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865390 4842 generic.go:334] "Generic (PLEG): container finished" podID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" exitCode=0 Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a"} Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865539 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865568 4842 scope.go:117] "RemoveContainer" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865550 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"2ff52db295c35e880c88d5b5145e78895e572aab912fdc448aa89426cbd58de9"} Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.870004 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" event={"ID":"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8","Type":"ContainerStarted","Data":"fb79482d9e32256b7087d6b760a4494bb83759f7391627be6bb55f49a53136b9"} Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.900688 4842 scope.go:117] "RemoveContainer" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.922633 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" podStartSLOduration=1.776941543 podStartE2EDuration="5.922605659s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:22.115498725 +0000 UTC m=+787.492766627" lastFinishedPulling="2026-02-02 06:59:26.261162831 +0000 UTC m=+791.638430743" observedRunningTime="2026-02-02 06:59:26.903403076 +0000 UTC m=+792.280671068" watchObservedRunningTime="2026-02-02 06:59:26.922605659 +0000 UTC m=+792.299873601" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.936905 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.944813 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.951982 4842 scope.go:117] "RemoveContainer" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.968980 4842 scope.go:117] "RemoveContainer" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" Feb 02 06:59:26 crc kubenswrapper[4842]: E0202 06:59:26.969708 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a\": container with ID starting with b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a not found: ID does not exist" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.969769 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a"} err="failed to get container status \"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a\": rpc error: code = NotFound desc = could not find container \"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a\": container with ID starting with b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a not found: ID does not exist" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.969809 4842 scope.go:117] 
"RemoveContainer" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" Feb 02 06:59:26 crc kubenswrapper[4842]: E0202 06:59:26.970337 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3\": container with ID starting with 144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3 not found: ID does not exist" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.970380 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3"} err="failed to get container status \"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3\": rpc error: code = NotFound desc = could not find container \"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3\": container with ID starting with 144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3 not found: ID does not exist" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.970413 4842 scope.go:117] "RemoveContainer" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" Feb 02 06:59:26 crc kubenswrapper[4842]: E0202 06:59:26.970774 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f\": container with ID starting with e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f not found: ID does not exist" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.970802 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f"} err="failed to get container status \"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f\": rpc error: code = NotFound desc = could not find container \"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f\": container with ID starting with e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f not found: ID does not exist" Feb 02 06:59:27 crc kubenswrapper[4842]: E0202 06:59:27.001818 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad236f43_9e37_4d8d_bdf5_838729fd7aa9.slice/crio-2ff52db295c35e880c88d5b5145e78895e572aab912fdc448aa89426cbd58de9\": RecentStats: unable to find data in memory cache]" Feb 02 06:59:27 crc kubenswrapper[4842]: I0202 06:59:27.447644 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" path="/var/lib/kubelet/pods/ad236f43-9e37-4d8d-bdf5-838729fd7aa9/volumes" Feb 02 06:59:31 crc kubenswrapper[4842]: I0202 06:59:31.988458 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.237993 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.238071 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.245258 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.933409 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.997987 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:59:41 crc kubenswrapper[4842]: I0202 06:59:41.948654 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.323667 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp"] Feb 02 06:59:56 crc kubenswrapper[4842]: E0202 06:59:56.324588 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324609 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" Feb 02 06:59:56 crc kubenswrapper[4842]: E0202 06:59:56.324626 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-utilities" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324639 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-utilities" Feb 02 06:59:56 crc kubenswrapper[4842]: E0202 06:59:56.324672 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-content" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324685 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-content" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324878 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.326155 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.332994 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.333178 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp"] Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.494368 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.494801 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.494883 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.596803 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.596969 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.597031 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.598039 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.598193 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.633179 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.657622 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:57 crc kubenswrapper[4842]: I0202 06:59:57.170091 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp"] Feb 02 06:59:57 crc kubenswrapper[4842]: W0202 06:59:57.184724 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4e0f2b_3826_4669_8732_05eb885adfe5.slice/crio-6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4 WatchSource:0}: Error finding container 6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4: Status 404 returned error can't find the container with id 6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4 Feb 02 06:59:57 crc kubenswrapper[4842]: E0202 06:59:57.524385 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4e0f2b_3826_4669_8732_05eb885adfe5.slice/crio-ed092ac8bf4cbba920b50ca964aa67edb99175fc3f707a1dbf75a3945e77fedf.scope\": RecentStats: unable to find data in memory cache]" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.067895 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kmw8f" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" containerID="cri-o://87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" gracePeriod=15 Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.111333 4842 generic.go:334] "Generic (PLEG): container finished" podID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerID="ed092ac8bf4cbba920b50ca964aa67edb99175fc3f707a1dbf75a3945e77fedf" exitCode=0 Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.111485 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" 
event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"ed092ac8bf4cbba920b50ca964aa67edb99175fc3f707a1dbf75a3945e77fedf"} Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.111539 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerStarted","Data":"6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4"} Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.521994 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kmw8f_59990591-2248-489b-bac2-e7cab22482f8/console/0.log" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.522392 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624458 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624503 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624552 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624573 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624605 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624744 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626079 4842 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca" (OuterVolumeSpecName: "service-ca") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626158 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config" (OuterVolumeSpecName: "console-config") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626190 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626251 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.643540 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.644806 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.645003 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2" (OuterVolumeSpecName: "kube-api-access-wpmb2") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "kube-api-access-wpmb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726150 4842 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726247 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726279 4842 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726304 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726333 4842 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726361 4842 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726385 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120563 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kmw8f_59990591-2248-489b-bac2-e7cab22482f8/console/0.log" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120637 4842 generic.go:334] "Generic (PLEG): container finished" podID="59990591-2248-489b-bac2-e7cab22482f8" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" exitCode=2 Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120680 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerDied","Data":"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"} Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120716 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerDied","Data":"f626d676ce0b2dbd85f858b166fb0050d475783a83143a42e19f369ae37353e6"} Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120746 4842 scope.go:117] "RemoveContainer" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120909 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.172535 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.180537 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.228641 4842 scope.go:117] "RemoveContainer" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" Feb 02 06:59:59 crc kubenswrapper[4842]: E0202 06:59:59.229262 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434\": container with ID starting with 87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434 not found: ID does not exist" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.229302 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"} err="failed to get container status \"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434\": rpc error: code = NotFound desc = could not find container \"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434\": container with ID starting with 87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434 not found: ID does not exist" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.451189 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59990591-2248-489b-bac2-e7cab22482f8" path="/var/lib/kubelet/pods/59990591-2248-489b-bac2-e7cab22482f8/volumes" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.132263 4842 generic.go:334] "Generic (PLEG): container finished" podID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerID="53158c113d43cbb2bb783b307208a0f826a90fb0a10ad9e93767be3d50edb5ea" exitCode=0 Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.132326 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"53158c113d43cbb2bb783b307208a0f826a90fb0a10ad9e93767be3d50edb5ea"} Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.195140 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:00:00 crc kubenswrapper[4842]: E0202 07:00:00.199855 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.199903 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.200188 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.201254 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.204877 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.205953 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.208604 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.350047 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.350657 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.350726 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.451368 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.451412 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.451435 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.452467 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod 
\"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.459505 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.483030 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.560524 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.824864 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:00:00 crc kubenswrapper[4842]: W0202 07:00:00.836442 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda36ad95_63f3_4cfb_8da7_96b730ccc79b.slice/crio-fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a WatchSource:0}: Error finding container fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a: Status 404 returned error can't find the container with id fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.141658 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerStarted","Data":"dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49"} Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.141717 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerStarted","Data":"fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a"} Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.147143 4842 generic.go:334] "Generic (PLEG): container finished" podID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerID="01cc3645a9de560ef76c7015efc21ddb5ce809fbe1708e54bbcf1d0de5f30d75" exitCode=0 Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.147184 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"01cc3645a9de560ef76c7015efc21ddb5ce809fbe1708e54bbcf1d0de5f30d75"} Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.166126 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" podStartSLOduration=1.166106288 podStartE2EDuration="1.166106288s" 
podCreationTimestamp="2026-02-02 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:00:01.161575466 +0000 UTC m=+826.538843408" watchObservedRunningTime="2026-02-02 07:00:01.166106288 +0000 UTC m=+826.543374210" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.153970 4842 generic.go:334] "Generic (PLEG): container finished" podID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerID="dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49" exitCode=0 Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.154061 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerDied","Data":"dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49"} Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.460124 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.581477 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"bb4e0f2b-3826-4669-8732-05eb885adfe5\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.582872 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"bb4e0f2b-3826-4669-8732-05eb885adfe5\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.582988 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"bb4e0f2b-3826-4669-8732-05eb885adfe5\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.584507 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle" (OuterVolumeSpecName: "bundle") pod "bb4e0f2b-3826-4669-8732-05eb885adfe5" (UID: "bb4e0f2b-3826-4669-8732-05eb885adfe5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.590105 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x" (OuterVolumeSpecName: "kube-api-access-zgn7x") pod "bb4e0f2b-3826-4669-8732-05eb885adfe5" (UID: "bb4e0f2b-3826-4669-8732-05eb885adfe5"). InnerVolumeSpecName "kube-api-access-zgn7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.603726 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util" (OuterVolumeSpecName: "util") pod "bb4e0f2b-3826-4669-8732-05eb885adfe5" (UID: "bb4e0f2b-3826-4669-8732-05eb885adfe5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.685446 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.685522 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.685544 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.169811 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.169804 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4"} Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.170045 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.518730 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.701508 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.701619 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.701659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.702577 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume" (OuterVolumeSpecName: "config-volume") pod "da36ad95-63f3-4cfb-8da7-96b730ccc79b" (UID: "da36ad95-63f3-4cfb-8da7-96b730ccc79b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.707517 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5" (OuterVolumeSpecName: "kube-api-access-rl8n5") pod "da36ad95-63f3-4cfb-8da7-96b730ccc79b" (UID: "da36ad95-63f3-4cfb-8da7-96b730ccc79b"). InnerVolumeSpecName "kube-api-access-rl8n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.708016 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da36ad95-63f3-4cfb-8da7-96b730ccc79b" (UID: "da36ad95-63f3-4cfb-8da7-96b730ccc79b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.803847 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.803904 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.803926 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:04 crc kubenswrapper[4842]: I0202 07:00:04.185729 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:04 crc kubenswrapper[4842]: I0202 07:00:04.185736 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerDied","Data":"fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a"} Feb 02 07:00:04 crc kubenswrapper[4842]: I0202 07:00:04.185926 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.277412 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc"] Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278122 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="util" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278137 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="util" Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278158 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="extract" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278167 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="extract" Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278179 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerName="collect-profiles" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278187 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerName="collect-profiles" Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278205 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="pull" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278234 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="pull" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278345 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="extract" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278365 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerName="collect-profiles" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278763 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.280641 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8hchk" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.282706 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.283108 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.283329 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.292823 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.316227 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc"] Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.406160 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-webhook-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.406453 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-apiservice-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.406602 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/b3b00acd-6687-457f-8744-7057f840e5bd-kube-api-access-b4mq5\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.507719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-webhook-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.507770 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-apiservice-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.507802 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/b3b00acd-6687-457f-8744-7057f840e5bd-kube-api-access-b4mq5\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.514189 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-apiservice-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.515915 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-webhook-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.531166 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9"] Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.532175 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.534286 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.534581 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6prb5" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.535200 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.542798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/b3b00acd-6687-457f-8744-7057f840e5bd-kube-api-access-b4mq5\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.582535 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9"] Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.598244 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.710558 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-apiservice-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.710851 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-webhook-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.711007 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tzz\" (UniqueName: \"kubernetes.io/projected/793714c2-9e47-4e82-a201-e2e8ac9d7bff-kube-api-access-g5tzz\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.812147 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-webhook-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.812236 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tzz\" (UniqueName: \"kubernetes.io/projected/793714c2-9e47-4e82-a201-e2e8ac9d7bff-kube-api-access-g5tzz\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.812258 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-apiservice-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.817890 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-apiservice-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.828017 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tzz\" (UniqueName: \"kubernetes.io/projected/793714c2-9e47-4e82-a201-e2e8ac9d7bff-kube-api-access-g5tzz\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: 
\"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.828479 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-webhook-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.874528 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.911946 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc"] Feb 02 07:00:11 crc kubenswrapper[4842]: W0202 07:00:11.921153 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3b00acd_6687_457f_8744_7057f840e5bd.slice/crio-7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee WatchSource:0}: Error finding container 7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee: Status 404 returned error can't find the container with id 7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee Feb 02 07:00:12 crc kubenswrapper[4842]: I0202 07:00:12.118672 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9"] Feb 02 07:00:12 crc kubenswrapper[4842]: W0202 07:00:12.126452 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod793714c2_9e47_4e82_a201_e2e8ac9d7bff.slice/crio-bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae WatchSource:0}: Error finding container bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae: Status 404 returned error can't find the container with id bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae Feb 02 07:00:12 crc kubenswrapper[4842]: I0202 07:00:12.233794 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" event={"ID":"793714c2-9e47-4e82-a201-e2e8ac9d7bff","Type":"ContainerStarted","Data":"bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae"} Feb 02 07:00:12 crc kubenswrapper[4842]: I0202 07:00:12.234721 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" event={"ID":"b3b00acd-6687-457f-8744-7057f840e5bd","Type":"ContainerStarted","Data":"7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee"} Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.263964 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" event={"ID":"b3b00acd-6687-457f-8744-7057f840e5bd","Type":"ContainerStarted","Data":"b4b36f0fab828459cd9384225a61edafc93a85b49a1286f4b49de5a26b26d8d6"} Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.264910 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.266417 4842 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" event={"ID":"793714c2-9e47-4e82-a201-e2e8ac9d7bff","Type":"ContainerStarted","Data":"1bdaceb2b2fc4d1e07a515c090032882ebac5945f6e75210bddcaedb9529a0da"} Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.266685 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.302814 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" podStartSLOduration=1.397689371 podStartE2EDuration="5.302780732s" podCreationTimestamp="2026-02-02 07:00:11 +0000 UTC" firstStartedPulling="2026-02-02 07:00:11.923063453 +0000 UTC m=+837.300331355" lastFinishedPulling="2026-02-02 07:00:15.828154804 +0000 UTC m=+841.205422716" observedRunningTime="2026-02-02 07:00:16.294859087 +0000 UTC m=+841.672127059" watchObservedRunningTime="2026-02-02 07:00:16.302780732 +0000 UTC m=+841.680048684" Feb 02 07:00:31 crc kubenswrapper[4842]: I0202 07:00:31.879978 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:31 crc kubenswrapper[4842]: I0202 07:00:31.918590 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" podStartSLOduration=17.147146949 podStartE2EDuration="20.918561729s" podCreationTimestamp="2026-02-02 07:00:11 +0000 UTC" firstStartedPulling="2026-02-02 07:00:12.129405954 +0000 UTC m=+837.506673866" lastFinishedPulling="2026-02-02 07:00:15.900820734 +0000 UTC m=+841.278088646" observedRunningTime="2026-02-02 07:00:16.334512813 +0000 UTC m=+841.711780735" watchObservedRunningTime="2026-02-02 07:00:31.918561729 +0000 UTC m=+857.295829671" Feb 02 07:00:42 crc kubenswrapper[4842]: I0202 07:00:42.146473 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:00:42 crc kubenswrapper[4842]: I0202 07:00:42.147064 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:00:51 crc kubenswrapper[4842]: I0202 07:00:51.602138 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.354287 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fvmtq"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.356823 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.358680 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.358953 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.361013 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lg845" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.365899 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.366841 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.371496 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.371547 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.460763 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-74hmd"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.461545 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.463416 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.463734 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.464103 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4kbng" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.464615 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.475619 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7h9kp"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.476433 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.485763 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505719 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505776 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-reloader\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505792 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics-certs\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505831 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-startup\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505860 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklwf\" (UniqueName: \"kubernetes.io/projected/412f3125-792a-4cb4-858e-e0376903066a-kube-api-access-pklwf\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5znk\" (UniqueName: \"kubernetes.io/projected/79110fb7-d2a2-4330-ab4b-d717a7b943e6-kube-api-access-c5znk\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505894 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505910 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-sockets\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505924 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" 
(UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-conf\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.506267 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7h9kp"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606728 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stm2c\" (UniqueName: \"kubernetes.io/projected/3016a0a1-abd6-486a-af0b-cf4c7b8db672-kube-api-access-stm2c\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606774 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606799 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-reloader\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606813 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics-certs\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606842 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606865 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metrics-certs\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606886 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-startup\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxnw\" (UniqueName: \"kubernetes.io/projected/890c2fc6-f70e-47e4-8578-908ec14d719f-kube-api-access-mdxnw\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606918 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metallb-excludel2\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606939 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-metrics-certs\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606962 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklwf\" (UniqueName: \"kubernetes.io/projected/412f3125-792a-4cb4-858e-e0376903066a-kube-api-access-pklwf\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606977 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5znk\" (UniqueName: \"kubernetes.io/projected/79110fb7-d2a2-4330-ab4b-d717a7b943e6-kube-api-access-c5znk\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607000 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607018 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-cert\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607039 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-sockets\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607058 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-conf\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607478 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-conf\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607642 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " 
pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.607876 4842 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.607980 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert podName:412f3125-792a-4cb4-858e-e0376903066a nodeName:}" failed. No retries permitted until 2026-02-02 07:00:53.107963465 +0000 UTC m=+878.485231377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert") pod "frr-k8s-webhook-server-7df86c4f6c-ksx75" (UID: "412f3125-792a-4cb4-858e-e0376903066a") : secret "frr-k8s-webhook-server-cert" not found Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.608381 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-sockets\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.608431 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-reloader\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.609754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-startup\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.612835 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics-certs\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.625667 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5znk\" (UniqueName: \"kubernetes.io/projected/79110fb7-d2a2-4330-ab4b-d717a7b943e6-kube-api-access-c5znk\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.626659 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklwf\" (UniqueName: \"kubernetes.io/projected/412f3125-792a-4cb4-858e-e0376903066a-kube-api-access-pklwf\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.688790 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707833 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-cert\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stm2c\" (UniqueName: \"kubernetes.io/projected/3016a0a1-abd6-486a-af0b-cf4c7b8db672-kube-api-access-stm2c\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707953 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metrics-certs\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707977 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxnw\" (UniqueName: \"kubernetes.io/projected/890c2fc6-f70e-47e4-8578-908ec14d719f-kube-api-access-mdxnw\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707990 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metallb-excludel2\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.708010 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-metrics-certs\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.708458 4842 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.708542 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist podName:3016a0a1-abd6-486a-af0b-cf4c7b8db672 nodeName:}" failed. No retries permitted until 2026-02-02 07:00:53.208519352 +0000 UTC m=+878.585787274 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist") pod "speaker-74hmd" (UID: "3016a0a1-abd6-486a-af0b-cf4c7b8db672") : secret "metallb-memberlist" not found Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.709400 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metallb-excludel2\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.713438 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.713774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-metrics-certs\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.715321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metrics-certs\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.726774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-cert\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.733654 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxnw\" (UniqueName: \"kubernetes.io/projected/890c2fc6-f70e-47e4-8578-908ec14d719f-kube-api-access-mdxnw\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.735737 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stm2c\" (UniqueName: \"kubernetes.io/projected/3016a0a1-abd6-486a-af0b-cf4c7b8db672-kube-api-access-stm2c\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.800258 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.009777 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7h9kp"] Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.113172 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.119940 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.214169 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:53 crc kubenswrapper[4842]: E0202 07:00:53.214391 4842 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 07:00:53 crc kubenswrapper[4842]: E0202 07:00:53.214444 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist podName:3016a0a1-abd6-486a-af0b-cf4c7b8db672 nodeName:}" failed. No retries permitted until 2026-02-02 07:00:54.214427034 +0000 UTC m=+879.591694956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist") pod "speaker-74hmd" (UID: "3016a0a1-abd6-486a-af0b-cf4c7b8db672") : secret "metallb-memberlist" not found Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.299196 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521044 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7h9kp" event={"ID":"890c2fc6-f70e-47e4-8578-908ec14d719f","Type":"ContainerStarted","Data":"ec2bbc8fc0ebee72e24fff4d4806a8261d5eecdfd92fdf5f95b216de757c206b"} Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521459 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7h9kp" event={"ID":"890c2fc6-f70e-47e4-8578-908ec14d719f","Type":"ContainerStarted","Data":"ee504b0714bc44442a0347785bd5db2a0c7c096bf32556f9f7493aa1ca07470b"} Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7h9kp" event={"ID":"890c2fc6-f70e-47e4-8578-908ec14d719f","Type":"ContainerStarted","Data":"f5d2eed0060f1351d0e20bc0136139b2acdb5fa7d90989467bcba3d37d8c9991"} Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521531 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.522889 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"33dd9792e3bd6e4f15e12a23878d10f12b6c9602aceb64676a77e4372ac8b26d"} Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.529984 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"] Feb 02 07:00:53 crc kubenswrapper[4842]: W0202 07:00:53.533713 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod412f3125_792a_4cb4_858e_e0376903066a.slice/crio-f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6 WatchSource:0}: Error finding container f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6: Status 404 returned error can't find the container with id f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6 Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.227040 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.238139 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.273052 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-74hmd" Feb 02 07:00:54 crc kubenswrapper[4842]: W0202 07:00:54.292650 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3016a0a1_abd6_486a_af0b_cf4c7b8db672.slice/crio-7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633 WatchSource:0}: Error finding container 7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633: Status 404 returned error can't find the container with id 7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633 Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.535856 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-74hmd" event={"ID":"3016a0a1-abd6-486a-af0b-cf4c7b8db672","Type":"ContainerStarted","Data":"7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633"} Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.540496 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" event={"ID":"412f3125-792a-4cb4-858e-e0376903066a","Type":"ContainerStarted","Data":"f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6"} Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.466632 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7h9kp" podStartSLOduration=3.466615633 podStartE2EDuration="3.466615633s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:00:53.542818334 +0000 UTC m=+878.920086316" watchObservedRunningTime="2026-02-02 07:00:55.466615633 +0000 UTC m=+880.843883535" Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.554582 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-74hmd" event={"ID":"3016a0a1-abd6-486a-af0b-cf4c7b8db672","Type":"ContainerStarted","Data":"43db8442d1563ae29224fbbd0701a1b4df347189ea1bc859f26d34ea5a5ce252"} Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.554625 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-74hmd" event={"ID":"3016a0a1-abd6-486a-af0b-cf4c7b8db672","Type":"ContainerStarted","Data":"671c7439cc5f5922688bd073539a02a0a5964c14fb1abd24c5828de35900fa25"} Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.555351 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-74hmd" Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.596612 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" event={"ID":"412f3125-792a-4cb4-858e-e0376903066a","Type":"ContainerStarted","Data":"020bc96addd5d327377e6a31361f3fed0f7d394ecb75a60a59988934e8d2d5a0"} Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.597138 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.600811 4842 generic.go:334] "Generic (PLEG): container finished" podID="79110fb7-d2a2-4330-ab4b-d717a7b943e6" containerID="78e963621fb75711833339baa2efff3c2e3b5d625f9d32fc65d4177236ca375f" exitCode=0 Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.600881 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" 
event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerDied","Data":"78e963621fb75711833339baa2efff3c2e3b5d625f9d32fc65d4177236ca375f"} Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.616563 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-74hmd" podStartSLOduration=8.616531122 podStartE2EDuration="8.616531122s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:00:55.615210433 +0000 UTC m=+880.992478345" watchObservedRunningTime="2026-02-02 07:01:00.616531122 +0000 UTC m=+885.993799084" Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.623057 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" podStartSLOduration=2.112287211 podStartE2EDuration="8.623029802s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="2026-02-02 07:00:53.536351544 +0000 UTC m=+878.913619466" lastFinishedPulling="2026-02-02 07:01:00.047094115 +0000 UTC m=+885.424362057" observedRunningTime="2026-02-02 07:01:00.613759554 +0000 UTC m=+885.991027476" watchObservedRunningTime="2026-02-02 07:01:00.623029802 +0000 UTC m=+886.000297754" Feb 02 07:01:01 crc kubenswrapper[4842]: I0202 07:01:01.611904 4842 generic.go:334] "Generic (PLEG): container finished" podID="79110fb7-d2a2-4330-ab4b-d717a7b943e6" containerID="cd86e7e997837db99ee68635c8a505dfedb823af70b9d37d72b83c4ed6d88c2b" exitCode=0 Feb 02 07:01:01 crc kubenswrapper[4842]: I0202 07:01:01.612022 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerDied","Data":"cd86e7e997837db99ee68635c8a505dfedb823af70b9d37d72b83c4ed6d88c2b"} Feb 02 07:01:02 crc kubenswrapper[4842]: I0202 07:01:02.624862 4842 generic.go:334] "Generic (PLEG): container finished" podID="79110fb7-d2a2-4330-ab4b-d717a7b943e6" containerID="af140bdc1a99d830d21c65581b94a11cd63957551b80f2db7f99e580a1886814" exitCode=0 Feb 02 07:01:02 crc kubenswrapper[4842]: I0202 07:01:02.625001 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerDied","Data":"af140bdc1a99d830d21c65581b94a11cd63957551b80f2db7f99e580a1886814"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652054 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"b1fb9fb1718478fa0c4cc12b65cd0801e789795d74f4c12188350256a042a05d"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652428 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"f814b5f20b461008e484354f68963ee4388458bd5a761f5d14f77b0da409d365"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"01293b03f1d4c47f3076616d43ceb8750ee09ff878ecb94fe322c7f2e548c684"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652469 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" 
event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"93521f150f2ef4cbe56c0b2a112d3b082ddb7b5d2baba5a4dc3188d1f48f53fc"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"97e04fac79e56e025f2139eaf2a01691780f3c41aa86fcf7e02c6b4f080c6518"} Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.279405 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-74hmd" Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.666868 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"549273cdb5500a33a748d465e785dad1ad378ba2a110d1377572139c81cf3255"} Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.667064 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.696969 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fvmtq" podStartSLOduration=5.469614473 podStartE2EDuration="12.696953196s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="2026-02-02 07:00:52.856931708 +0000 UTC m=+878.234199620" lastFinishedPulling="2026-02-02 07:01:00.084270421 +0000 UTC m=+885.461538343" observedRunningTime="2026-02-02 07:01:04.696139576 +0000 UTC m=+890.073407498" watchObservedRunningTime="2026-02-02 07:01:04.696953196 +0000 UTC m=+890.074221108" Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.858933 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw"] Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.861900 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.866271 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.870078 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw"] Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.028999 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.029298 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.029425 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.130460 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.130541 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.130641 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.131199 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.131444 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.157026 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.186357 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.660433 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw"] Feb 02 07:01:06 crc kubenswrapper[4842]: W0202 07:01:06.677828 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68358186_3b13_493a_9141_c206629af46e.slice/crio-f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102 WatchSource:0}: Error finding container f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102: Status 404 returned error can't find the container with id f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102 Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.688995 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.689786 4842 generic.go:334] "Generic (PLEG): container finished" podID="68358186-3b13-493a-9141-c206629af46e" containerID="f7d334b0386fa7d7f040c48b8f37d0d5d3b0e45d2f8371acf22dba51ce3bfb04" exitCode=0 Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.689847 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"f7d334b0386fa7d7f040c48b8f37d0d5d3b0e45d2f8371acf22dba51ce3bfb04"} Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.689880 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerStarted","Data":"f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102"} Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.766266 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:11 crc kubenswrapper[4842]: 
I0202 07:01:11.743741 4842 generic.go:334] "Generic (PLEG): container finished" podID="68358186-3b13-493a-9141-c206629af46e" containerID="73b7f7d4f7e26bb9f9bc1dab6a87bd9e36d8745b43faf72afee527b98add84a0" exitCode=0 Feb 02 07:01:11 crc kubenswrapper[4842]: I0202 07:01:11.743860 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"73b7f7d4f7e26bb9f9bc1dab6a87bd9e36d8745b43faf72afee527b98add84a0"} Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.146868 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.147012 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.692894 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.753718 4842 generic.go:334] "Generic (PLEG): container finished" podID="68358186-3b13-493a-9141-c206629af46e" containerID="d8bae5a377ac8095538b04933b8f72015496b12fb4ebc40f444eab2deb29f116" exitCode=0 Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.753764 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"d8bae5a377ac8095538b04933b8f72015496b12fb4ebc40f444eab2deb29f116"} Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.803994 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:01:13 crc kubenswrapper[4842]: I0202 07:01:13.307835 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.030938 4842 util.go:48] "No ready sandbox for pod can be found. 
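
Note: the "Generic (PLEG): container finished" lines come from the Pod Lifecycle Event Generator, which periodically relists containers from the runtime, diffs the new snapshot against the previous one, and turns state transitions into the ContainerStarted/ContainerDied events the sync loop logs above. A rough sketch of that diff step; the types are hypothetical, and the real PLEG also tracks sandboxes and caches pod status:

```go
// PLEG-style relisting: diff two container snapshots into lifecycle events.
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct{ id, kind string }

func relist(prev, cur map[string]state) []event {
	var evs []event
	for id, s := range cur {
		old, seen := prev[id]
		switch {
		case !seen && s == running:
			evs = append(evs, event{id, "ContainerStarted"})
		case seen && old == running && s == exited:
			evs = append(evs, event{id, "ContainerDied"})
		}
	}
	return evs
}

func main() {
	prev := map[string]state{"cd86e7e9": running}
	cur := map[string]state{"cd86e7e9": exited, "b1fb9fb1": running}
	for _, e := range relist(prev, cur) { // two events, in map order
		fmt.Println(e.kind, e.id)
	}
}
```
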
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.155433 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"68358186-3b13-493a-9141-c206629af46e\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.155793 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"68358186-3b13-493a-9141-c206629af46e\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.156169 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"68358186-3b13-493a-9141-c206629af46e\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.157179 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle" (OuterVolumeSpecName: "bundle") pod "68358186-3b13-493a-9141-c206629af46e" (UID: "68358186-3b13-493a-9141-c206629af46e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.165803 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw" (OuterVolumeSpecName: "kube-api-access-j6smw") pod "68358186-3b13-493a-9141-c206629af46e" (UID: "68358186-3b13-493a-9141-c206629af46e"). InnerVolumeSpecName "kube-api-access-j6smw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.168374 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util" (OuterVolumeSpecName: "util") pod "68358186-3b13-493a-9141-c206629af46e" (UID: "68358186-3b13-493a-9141-c206629af46e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.258581 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.258642 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.258665 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.769659 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102"} Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.769726 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.769740 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.582578 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp"] Feb 02 07:01:19 crc kubenswrapper[4842]: E0202 07:01:19.583382 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="pull" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583396 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="pull" Feb 02 07:01:19 crc kubenswrapper[4842]: E0202 07:01:19.583417 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="util" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583423 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="util" Feb 02 07:01:19 crc kubenswrapper[4842]: E0202 07:01:19.583432 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="extract" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583438 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="extract" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583558 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="extract" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.584017 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.585907 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-mq2bc" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.586097 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.589530 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.642288 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp"] Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.658530 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrlg\" (UniqueName: \"kubernetes.io/projected/c8aa6122-bb1d-4642-b85f-18a2775e7c64-kube-api-access-rbrlg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.658625 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8aa6122-bb1d-4642-b85f-18a2775e7c64-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.760410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8aa6122-bb1d-4642-b85f-18a2775e7c64-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.760478 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrlg\" (UniqueName: \"kubernetes.io/projected/c8aa6122-bb1d-4642-b85f-18a2775e7c64-kube-api-access-rbrlg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.760986 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8aa6122-bb1d-4642-b85f-18a2775e7c64-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.784217 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrlg\" (UniqueName: \"kubernetes.io/projected/c8aa6122-bb1d-4642-b85f-18a2775e7c64-kube-api-access-rbrlg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.898801 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:20 crc kubenswrapper[4842]: I0202 07:01:20.128817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp"] Feb 02 07:01:20 crc kubenswrapper[4842]: W0202 07:01:20.137384 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8aa6122_bb1d_4642_b85f_18a2775e7c64.slice/crio-4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e WatchSource:0}: Error finding container 4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e: Status 404 returned error can't find the container with id 4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e Feb 02 07:01:20 crc kubenswrapper[4842]: I0202 07:01:20.809120 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" event={"ID":"c8aa6122-bb1d-4642-b85f-18a2775e7c64","Type":"ContainerStarted","Data":"4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e"} Feb 02 07:01:22 crc kubenswrapper[4842]: I0202 07:01:22.825037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" event={"ID":"c8aa6122-bb1d-4642-b85f-18a2775e7c64","Type":"ContainerStarted","Data":"88ef9cf7369e4a80ccc0386bc1f82b40331c63c169fa0c87717acc9c5652261d"} Feb 02 07:01:22 crc kubenswrapper[4842]: I0202 07:01:22.842847 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" podStartSLOduration=1.396516804 podStartE2EDuration="3.842830375s" podCreationTimestamp="2026-02-02 07:01:19 +0000 UTC" firstStartedPulling="2026-02-02 07:01:20.139677237 +0000 UTC m=+905.516945149" lastFinishedPulling="2026-02-02 07:01:22.585990808 +0000 UTC m=+907.963258720" observedRunningTime="2026-02-02 07:01:22.838698113 +0000 UTC m=+908.215966035" watchObservedRunningTime="2026-02-02 07:01:22.842830375 +0000 UTC m=+908.220098287" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.682390 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hj9fx"] Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.684520 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.691537 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.691636 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.691725 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mt7wl" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.698206 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hj9fx"] Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.770200 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.770460 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lwr\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-kube-api-access-p5lwr\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.872029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lwr\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-kube-api-access-p5lwr\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.872215 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.912519 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.922314 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lwr\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-kube-api-access-p5lwr\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.010856 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.515868 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-j6288"] Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.516937 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.521796 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qppzd" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.537485 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-j6288"] Feb 02 07:01:28 crc kubenswrapper[4842]: W0202 07:01:28.539486 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466ec5f5_a1b9_439d_a9d6_d5dbbe8d16c9.slice/crio-367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112 WatchSource:0}: Error finding container 367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112: Status 404 returned error can't find the container with id 367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112 Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.549571 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hj9fx"] Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.594399 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.594470 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpfc\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-kube-api-access-twpfc\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.695731 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.695779 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpfc\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-kube-api-access-twpfc\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.715011 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-bound-sa-token\") pod 
\"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.715564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpfc\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-kube-api-access-twpfc\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.834036 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.884724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" event={"ID":"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9","Type":"ContainerStarted","Data":"367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112"} Feb 02 07:01:29 crc kubenswrapper[4842]: I0202 07:01:29.135426 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-j6288"] Feb 02 07:01:29 crc kubenswrapper[4842]: W0202 07:01:29.138172 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7710841_a6c0_41ce_a408_f5940ab76922.slice/crio-cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2 WatchSource:0}: Error finding container cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2: Status 404 returned error can't find the container with id cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2 Feb 02 07:01:29 crc kubenswrapper[4842]: I0202 07:01:29.893130 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" event={"ID":"d7710841-a6c0-41ce-a408-f5940ab76922","Type":"ContainerStarted","Data":"cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2"} Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.920100 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" event={"ID":"d7710841-a6c0-41ce-a408-f5940ab76922","Type":"ContainerStarted","Data":"c69e254b8d125d2465d2054062400003c2056193d2c7f1e597b8d202c9475790"} Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.922904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" event={"ID":"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9","Type":"ContainerStarted","Data":"2d2659bcd7c0355849fbe98b1acb7681fddf44409d9ddd7f85f0b53858a32f6c"} Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.923068 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.940120 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" podStartSLOduration=1.5371201970000001 podStartE2EDuration="4.940094403s" podCreationTimestamp="2026-02-02 07:01:28 +0000 UTC" firstStartedPulling="2026-02-02 07:01:29.141288936 +0000 UTC m=+914.518556848" lastFinishedPulling="2026-02-02 07:01:32.544263152 +0000 UTC m=+917.921531054" observedRunningTime="2026-02-02 07:01:32.937113229 +0000 UTC m=+918.314381161" 
watchObservedRunningTime="2026-02-02 07:01:32.940094403 +0000 UTC m=+918.317362335" Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.906609 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" podStartSLOduration=5.92537136 podStartE2EDuration="9.90658345s" podCreationTimestamp="2026-02-02 07:01:27 +0000 UTC" firstStartedPulling="2026-02-02 07:01:28.543867179 +0000 UTC m=+913.921135111" lastFinishedPulling="2026-02-02 07:01:32.525079269 +0000 UTC m=+917.902347201" observedRunningTime="2026-02-02 07:01:32.97571936 +0000 UTC m=+918.352987292" watchObservedRunningTime="2026-02-02 07:01:36.90658345 +0000 UTC m=+922.283851392" Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.910400 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.912181 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.939609 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.051904 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.051963 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.052153 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.153664 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.153719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.153755 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod 
\"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.154260 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.154362 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.175617 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.243866 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.519502 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.955176 4842 generic.go:334] "Generic (PLEG): container finished" podID="548f8a7f-3f38-498d-999a-96753854d869" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" exitCode=0 Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.955276 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150"} Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.957374 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerStarted","Data":"ce6dfceaa02df9a199ab688a09ffe666908265586305bd31da204ee7ec4758f8"} Feb 02 07:01:38 crc kubenswrapper[4842]: I0202 07:01:38.013554 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:38 crc kubenswrapper[4842]: I0202 07:01:38.963922 4842 generic.go:334] "Generic (PLEG): container finished" podID="548f8a7f-3f38-498d-999a-96753854d869" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" exitCode=0 Feb 02 07:01:38 crc kubenswrapper[4842]: I0202 07:01:38.964275 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8"} Feb 02 07:01:39 crc kubenswrapper[4842]: E0202 07:01:39.019422 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548f8a7f_3f38_498d_999a_96753854d869.slice/crio-e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548f8a7f_3f38_498d_999a_96753854d869.slice/crio-conmon-e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:01:39 crc kubenswrapper[4842]: I0202 07:01:39.973302 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerStarted","Data":"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2"} Feb 02 07:01:39 crc kubenswrapper[4842]: I0202 07:01:39.991637 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9x2pr" podStartSLOduration=2.588904831 podStartE2EDuration="3.991621874s" podCreationTimestamp="2026-02-02 07:01:36 +0000 UTC" firstStartedPulling="2026-02-02 07:01:37.95720413 +0000 UTC m=+923.334472042" lastFinishedPulling="2026-02-02 07:01:39.359921163 +0000 UTC m=+924.737189085" observedRunningTime="2026-02-02 07:01:39.990466136 +0000 UTC m=+925.367734088" watchObservedRunningTime="2026-02-02 07:01:39.991621874 +0000 UTC m=+925.368889786" Feb 02 07:01:41 crc kubenswrapper[4842]: I0202 07:01:41.873499 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:01:41 crc kubenswrapper[4842]: I0202 07:01:41.874939 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:41 crc kubenswrapper[4842]: I0202 07:01:41.890308 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.032131 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.032440 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.032477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133042 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " 
pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133100 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133133 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.145733 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.145815 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.145895 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.146758 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.146861 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93" gracePeriod=600 Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.157055 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.191245 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.530171 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:42.999884 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93" exitCode=0 Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.000282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93"} Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.000325 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62"} Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.000352 4842 scope.go:117] "RemoveContainer" containerID="75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62" Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.005617 4842 generic.go:334] "Generic (PLEG): container finished" podID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerID="331609fa40669bd7840b308a9666007c56af8aa738cc0b311b0bd226734f37d3" exitCode=0 Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.005685 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"331609fa40669bd7840b308a9666007c56af8aa738cc0b311b0bd226734f37d3"} Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.005725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerStarted","Data":"10d6bb6305708264d17e0f259712182618a08b4e23d2fdb9d6c3dec64e76c9e2"} Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.018723 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerStarted","Data":"5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108"} Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.212311 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-446xj"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.214625 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.218684 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-446xj"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.221806 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n97wc" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.273975 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgkg\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-kube-api-access-rkgkg\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.274041 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-bound-sa-token\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.375034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgkg\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-kube-api-access-rkgkg\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.375210 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-bound-sa-token\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.394178 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgkg\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-kube-api-access-rkgkg\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.394283 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-bound-sa-token\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.530638 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.812807 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-446xj"] Feb 02 07:01:44 crc kubenswrapper[4842]: W0202 07:01:44.816996 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffbe6b41_d1da_4aec_bbfd_376c2f53a962.slice/crio-0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e WatchSource:0}: Error finding container 0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e: Status 404 returned error can't find the container with id 0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.878814 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.880200 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.893804 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.983998 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.984347 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.984405 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.027173 4842 generic.go:334] "Generic (PLEG): container finished" podID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerID="5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108" exitCode=0 Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.027263 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.027308 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerStarted","Data":"18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.028799 4842 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-446xj" event={"ID":"ffbe6b41-d1da-4aec-bbfd-376c2f53a962","Type":"ContainerStarted","Data":"9682e516fdd55be77931ab601a32dc9b2a374c2ff0e637c3453756d90a6a4093"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.028834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-446xj" event={"ID":"ffbe6b41-d1da-4aec-bbfd-376c2f53a962","Type":"ContainerStarted","Data":"0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.049455 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ltrf2" podStartSLOduration=2.4133591819999998 podStartE2EDuration="4.049436084s" podCreationTimestamp="2026-02-02 07:01:41 +0000 UTC" firstStartedPulling="2026-02-02 07:01:43.007817923 +0000 UTC m=+928.385085875" lastFinishedPulling="2026-02-02 07:01:44.643894865 +0000 UTC m=+930.021162777" observedRunningTime="2026-02-02 07:01:45.045578789 +0000 UTC m=+930.422846711" watchObservedRunningTime="2026-02-02 07:01:45.049436084 +0000 UTC m=+930.426704006" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.065971 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-446xj" podStartSLOduration=1.065953381 podStartE2EDuration="1.065953381s" podCreationTimestamp="2026-02-02 07:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:01:45.061120822 +0000 UTC m=+930.438388754" watchObservedRunningTime="2026-02-02 07:01:45.065953381 +0000 UTC m=+930.443221293" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.085559 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.085613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.085642 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.086280 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.086356 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.104632 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.202523 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.413488 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:01:45 crc kubenswrapper[4842]: W0202 07:01:45.420729 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0958b9f3_ea26_4013_9a68_3cf94fa2b557.slice/crio-e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4 WatchSource:0}: Error finding container e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4: Status 404 returned error can't find the container with id e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4 Feb 02 07:01:46 crc kubenswrapper[4842]: I0202 07:01:46.038578 4842 generic.go:334] "Generic (PLEG): container finished" podID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerID="9a1b5c3686f5d2f888d619760c2c6f065e2cfcbdb7a7c316780928bdc983a404" exitCode=0 Feb 02 07:01:46 crc kubenswrapper[4842]: I0202 07:01:46.038626 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"9a1b5c3686f5d2f888d619760c2c6f065e2cfcbdb7a7c316780928bdc983a404"} Feb 02 07:01:46 crc kubenswrapper[4842]: I0202 07:01:46.038678 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerStarted","Data":"e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4"} Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.049814 4842 generic.go:334] "Generic (PLEG): container finished" podID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerID="6562c4eba25712b89a5e8c0ada8a664aed0995c58fefc7d0c3c227145bba8a32" exitCode=0 Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.050451 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"6562c4eba25712b89a5e8c0ada8a664aed0995c58fefc7d0c3c227145bba8a32"} Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.244337 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.244418 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.308865 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:48 crc kubenswrapper[4842]: I0202 07:01:48.061865 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerStarted","Data":"4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf"} Feb 02 07:01:48 crc kubenswrapper[4842]: I0202 07:01:48.123129 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:48 crc kubenswrapper[4842]: I0202 07:01:48.149673 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xqgvd" podStartSLOduration=2.699414389 podStartE2EDuration="4.149649883s" podCreationTimestamp="2026-02-02 07:01:44 +0000 UTC" firstStartedPulling="2026-02-02 07:01:46.04101153 +0000 UTC m=+931.418279442" lastFinishedPulling="2026-02-02 07:01:47.491247014 +0000 UTC m=+932.868514936" observedRunningTime="2026-02-02 07:01:48.083564405 +0000 UTC m=+933.460832327" watchObservedRunningTime="2026-02-02 07:01:48.149649883 +0000 UTC m=+933.526917815" Feb 02 07:01:50 crc kubenswrapper[4842]: I0202 07:01:50.265591 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:50 crc kubenswrapper[4842]: I0202 07:01:50.266245 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9x2pr" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server" containerID="cri-o://a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" gracePeriod=2 Feb 02 07:01:50 crc kubenswrapper[4842]: I0202 07:01:50.967334 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.072104 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"548f8a7f-3f38-498d-999a-96753854d869\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.072191 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"548f8a7f-3f38-498d-999a-96753854d869\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.072299 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod \"548f8a7f-3f38-498d-999a-96753854d869\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.073161 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities" (OuterVolumeSpecName: "utilities") pod "548f8a7f-3f38-498d-999a-96753854d869" (UID: "548f8a7f-3f38-498d-999a-96753854d869"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.083409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr" (OuterVolumeSpecName: "kube-api-access-cjtkr") pod "548f8a7f-3f38-498d-999a-96753854d869" (UID: "548f8a7f-3f38-498d-999a-96753854d869"). InnerVolumeSpecName "kube-api-access-cjtkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098312 4842 generic.go:334] "Generic (PLEG): container finished" podID="548f8a7f-3f38-498d-999a-96753854d869" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" exitCode=0 Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098388 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098374 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2"} Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098480 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"ce6dfceaa02df9a199ab688a09ffe666908265586305bd31da204ee7ec4758f8"} Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098552 4842 scope.go:117] "RemoveContainer" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.133529 4842 scope.go:117] "RemoveContainer" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.150035 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "548f8a7f-3f38-498d-999a-96753854d869" (UID: "548f8a7f-3f38-498d-999a-96753854d869"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.161511 4842 scope.go:117] "RemoveContainer" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.174268 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.174310 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.174324 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.185467 4842 scope.go:117] "RemoveContainer" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" Feb 02 07:01:51 crc kubenswrapper[4842]: E0202 07:01:51.186328 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2\": container with ID starting with a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2 not found: ID does not exist" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.186385 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2"} err="failed to get container status \"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2\": rpc error: code = NotFound desc = could not find container \"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2\": container with ID starting with a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2 not found: ID does not exist" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.186411 4842 scope.go:117] "RemoveContainer" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" Feb 02 07:01:51 crc kubenswrapper[4842]: E0202 07:01:51.186908 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8\": container with ID starting with e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8 not found: ID does not exist" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.186929 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8"} err="failed to get container status \"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8\": rpc error: code = NotFound desc = could not find container \"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8\": container with ID starting with e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8 not found: ID does not exist" Feb 02 07:01:51 crc 
kubenswrapper[4842]: I0202 07:01:51.186941 4842 scope.go:117] "RemoveContainer" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" Feb 02 07:01:51 crc kubenswrapper[4842]: E0202 07:01:51.187398 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150\": container with ID starting with 9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150 not found: ID does not exist" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.187445 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150"} err="failed to get container status \"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150\": rpc error: code = NotFound desc = could not find container \"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150\": container with ID starting with 9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150 not found: ID does not exist" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.445031 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.445072 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:52 crc kubenswrapper[4842]: I0202 07:01:52.192341 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:52 crc kubenswrapper[4842]: I0202 07:01:52.192698 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:52 crc kubenswrapper[4842]: I0202 07:01:52.243577 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:53 crc kubenswrapper[4842]: I0202 07:01:53.167966 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:53 crc kubenswrapper[4842]: I0202 07:01:53.447050 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548f8a7f-3f38-498d-999a-96753854d869" path="/var/lib/kubelet/pods/548f8a7f-3f38-498d-999a-96753854d869/volumes" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.884360 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5549s"] Feb 02 07:01:54 crc kubenswrapper[4842]: E0202 07:01:54.885034 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-utilities" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885056 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-utilities" Feb 02 07:01:54 crc kubenswrapper[4842]: E0202 07:01:54.885072 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-content" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885086 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-content" Feb 02 
07:01:54 crc kubenswrapper[4842]: E0202 07:01:54.885120 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885137 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885368 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885989 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.890661 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kf99s" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.893307 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.894302 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.896002 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5549s"] Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.074963 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49wgz\" (UniqueName: \"kubernetes.io/projected/e2e2a93a-9c50-4769-9983-e51f49c374d5-kube-api-access-49wgz\") pod \"openstack-operator-index-5549s\" (UID: \"e2e2a93a-9c50-4769-9983-e51f49c374d5\") " pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.177462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49wgz\" (UniqueName: \"kubernetes.io/projected/e2e2a93a-9c50-4769-9983-e51f49c374d5-kube-api-access-49wgz\") pod \"openstack-operator-index-5549s\" (UID: \"e2e2a93a-9c50-4769-9983-e51f49c374d5\") " pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.202920 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.203000 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.212389 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49wgz\" (UniqueName: \"kubernetes.io/projected/e2e2a93a-9c50-4769-9983-e51f49c374d5-kube-api-access-49wgz\") pod \"openstack-operator-index-5549s\" (UID: \"e2e2a93a-9c50-4769-9983-e51f49c374d5\") " pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.256598 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.279853 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.852975 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5549s"] Feb 02 07:01:56 crc kubenswrapper[4842]: I0202 07:01:56.138968 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5549s" event={"ID":"e2e2a93a-9c50-4769-9983-e51f49c374d5","Type":"ContainerStarted","Data":"6ebc9d493ba802278e6c55edff41e45d01f39c4caf4d74970fd717b7f0ed0959"} Feb 02 07:01:56 crc kubenswrapper[4842]: I0202 07:01:56.196691 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:57 crc kubenswrapper[4842]: I0202 07:01:57.149886 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5549s" event={"ID":"e2e2a93a-9c50-4769-9983-e51f49c374d5","Type":"ContainerStarted","Data":"a4054be2e6e6ef664dba1de9f7b1dfddf7e3cc36663ab73d6a99d202958ffae2"} Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.071956 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5549s" podStartSLOduration=5.373697913 podStartE2EDuration="6.071931043s" podCreationTimestamp="2026-02-02 07:01:54 +0000 UTC" firstStartedPulling="2026-02-02 07:01:55.851670645 +0000 UTC m=+941.228938557" lastFinishedPulling="2026-02-02 07:01:56.549903775 +0000 UTC m=+941.927171687" observedRunningTime="2026-02-02 07:01:57.171930628 +0000 UTC m=+942.549198610" watchObservedRunningTime="2026-02-02 07:02:00.071931043 +0000 UTC m=+945.449198985" Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.073472 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.074007 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ltrf2" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server" containerID="cri-o://18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6" gracePeriod=2 Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.473606 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.475002 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xqgvd" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server" containerID="cri-o://4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf" gracePeriod=2 Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.208933 4842 generic.go:334] "Generic (PLEG): container finished" podID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerID="18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6" exitCode=0 Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.209003 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" 
event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6"} Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.211654 4842 generic.go:334] "Generic (PLEG): container finished" podID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerID="4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf" exitCode=0 Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.211686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf"} Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.318114 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.322015 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.409155 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.409264 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.415297 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj" (OuterVolumeSpecName: "kube-api-access-vdpzj") pod "0958b9f3-ea26-4013-9a68-3cf94fa2b557" (UID: "0958b9f3-ea26-4013-9a68-3cf94fa2b557"). InnerVolumeSpecName "kube-api-access-vdpzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.481055 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80cf1b43-3437-4ef8-b9c7-a8bd77270228" (UID: "80cf1b43-3437-4ef8-b9c7-a8bd77270228"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515640 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515740 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515776 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515803 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.516314 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.516339 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.516626 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities" (OuterVolumeSpecName: "utilities") pod "0958b9f3-ea26-4013-9a68-3cf94fa2b557" (UID: "0958b9f3-ea26-4013-9a68-3cf94fa2b557"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.517914 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities" (OuterVolumeSpecName: "utilities") pod "80cf1b43-3437-4ef8-b9c7-a8bd77270228" (UID: "80cf1b43-3437-4ef8-b9c7-a8bd77270228"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.522888 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm" (OuterVolumeSpecName: "kube-api-access-l7trm") pod "80cf1b43-3437-4ef8-b9c7-a8bd77270228" (UID: "80cf1b43-3437-4ef8-b9c7-a8bd77270228"). InnerVolumeSpecName "kube-api-access-l7trm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.536501 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0958b9f3-ea26-4013-9a68-3cf94fa2b557" (UID: "0958b9f3-ea26-4013-9a68-3cf94fa2b557"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617376 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617433 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617458 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617476 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.224467 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4"} Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.224545 4842 scope.go:117] "RemoveContainer" containerID="4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.224718 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.233169 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"10d6bb6305708264d17e0f259712182618a08b4e23d2fdb9d6c3dec64e76c9e2"} Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.233275 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.264591 4842 scope.go:117] "RemoveContainer" containerID="6562c4eba25712b89a5e8c0ada8a664aed0995c58fefc7d0c3c227145bba8a32" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.292810 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.304461 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.313621 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.316613 4842 scope.go:117] "RemoveContainer" containerID="9a1b5c3686f5d2f888d619760c2c6f065e2cfcbdb7a7c316780928bdc983a404" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.321451 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.341542 4842 scope.go:117] "RemoveContainer" containerID="18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.369041 4842 scope.go:117] "RemoveContainer" containerID="5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.392440 4842 scope.go:117] "RemoveContainer" containerID="331609fa40669bd7840b308a9666007c56af8aa738cc0b311b0bd226734f37d3" Feb 02 07:02:03 crc kubenswrapper[4842]: I0202 07:02:03.448029 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" path="/var/lib/kubelet/pods/0958b9f3-ea26-4013-9a68-3cf94fa2b557/volumes" Feb 02 07:02:03 crc kubenswrapper[4842]: I0202 07:02:03.450675 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" path="/var/lib/kubelet/pods/80cf1b43-3437-4ef8-b9c7-a8bd77270228/volumes" Feb 02 07:02:05 crc kubenswrapper[4842]: I0202 07:02:05.257128 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:02:05 crc kubenswrapper[4842]: I0202 07:02:05.257200 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:02:05 crc kubenswrapper[4842]: I0202 07:02:05.313710 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:02:06 crc kubenswrapper[4842]: I0202 07:02:06.316715 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5549s" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.344979 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"] Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.345967 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-utilities" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.345996 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-utilities" Feb 02 
07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346014 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346027 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server" Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346048 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-content" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346062 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-content" Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346080 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346121 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server" Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346144 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-content" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346158 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-content" Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346186 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-utilities" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346200 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-utilities" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346448 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346469 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.347987 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.351509 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pxhx2" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.364708 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"] Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.460319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.460441 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.460491 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.561837 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.561919 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.561942 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.562677 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.562963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.580978 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.680057 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.915080 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"] Feb 02 07:02:11 crc kubenswrapper[4842]: I0202 07:02:11.329976 4842 generic.go:334] "Generic (PLEG): container finished" podID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerID="1a5f35b5a4eb71f9bb5798da2dcdf06862b34028bed5081306f93a56b70bc26e" exitCode=0 Feb 02 07:02:11 crc kubenswrapper[4842]: I0202 07:02:11.330016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerDied","Data":"1a5f35b5a4eb71f9bb5798da2dcdf06862b34028bed5081306f93a56b70bc26e"} Feb 02 07:02:11 crc kubenswrapper[4842]: I0202 07:02:11.330043 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerStarted","Data":"429c116ca0225b38ad58e782d7cbf54cac7094f15ae8eb654edf041be3e18bed"} Feb 02 07:02:12 crc kubenswrapper[4842]: I0202 07:02:12.345836 4842 generic.go:334] "Generic (PLEG): container finished" podID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerID="cd6c9e561c6952477b245b41b2a0ba4090b60c5bf07255d24ef3c826cb541957" exitCode=0 Feb 02 07:02:12 crc kubenswrapper[4842]: I0202 07:02:12.345906 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerDied","Data":"cd6c9e561c6952477b245b41b2a0ba4090b60c5bf07255d24ef3c826cb541957"} Feb 02 07:02:13 crc kubenswrapper[4842]: I0202 07:02:13.359203 4842 generic.go:334] "Generic (PLEG): container finished" podID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerID="fae69b26f8c1f3300dec7ceb2b1e84f680325e0e32c2d237f63fd4132afa4921" exitCode=0 Feb 02 07:02:13 crc kubenswrapper[4842]: I0202 07:02:13.359374 4842 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerDied","Data":"fae69b26f8c1f3300dec7ceb2b1e84f680325e0e32c2d237f63fd4132afa4921"} Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.733085 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.829109 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.829318 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.829374 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.830093 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle" (OuterVolumeSpecName: "bundle") pod "3d9034b5-b9d6-4e70-8cae-f6226cd41d78" (UID: "3d9034b5-b9d6-4e70-8cae-f6226cd41d78"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.835123 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp" (OuterVolumeSpecName: "kube-api-access-dxwpp") pod "3d9034b5-b9d6-4e70-8cae-f6226cd41d78" (UID: "3d9034b5-b9d6-4e70-8cae-f6226cd41d78"). InnerVolumeSpecName "kube-api-access-dxwpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.842822 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util" (OuterVolumeSpecName: "util") pod "3d9034b5-b9d6-4e70-8cae-f6226cd41d78" (UID: "3d9034b5-b9d6-4e70-8cae-f6226cd41d78"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.930664 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.930713 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.930731 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:15 crc kubenswrapper[4842]: I0202 07:02:15.381950 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerDied","Data":"429c116ca0225b38ad58e782d7cbf54cac7094f15ae8eb654edf041be3e18bed"} Feb 02 07:02:15 crc kubenswrapper[4842]: I0202 07:02:15.382823 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429c116ca0225b38ad58e782d7cbf54cac7094f15ae8eb654edf041be3e18bed" Feb 02 07:02:15 crc kubenswrapper[4842]: I0202 07:02:15.382166 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.805968 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg"] Feb 02 07:02:18 crc kubenswrapper[4842]: E0202 07:02:18.806743 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="extract" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.806765 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="extract" Feb 02 07:02:18 crc kubenswrapper[4842]: E0202 07:02:18.806786 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="pull" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.806799 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="pull" Feb 02 07:02:18 crc kubenswrapper[4842]: E0202 07:02:18.806832 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="util" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.806845 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="util" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.807119 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="extract" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.807924 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.809881 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mhlnc" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.842671 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg"] Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.003447 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vf7\" (UniqueName: \"kubernetes.io/projected/3081c94c-e2f4-48b5-90b5-8bcc58234a9b-kube-api-access-q2vf7\") pod \"openstack-operator-controller-init-757f46c65d-gfksg\" (UID: \"3081c94c-e2f4-48b5-90b5-8bcc58234a9b\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.104539 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vf7\" (UniqueName: \"kubernetes.io/projected/3081c94c-e2f4-48b5-90b5-8bcc58234a9b-kube-api-access-q2vf7\") pod \"openstack-operator-controller-init-757f46c65d-gfksg\" (UID: \"3081c94c-e2f4-48b5-90b5-8bcc58234a9b\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.126187 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vf7\" (UniqueName: \"kubernetes.io/projected/3081c94c-e2f4-48b5-90b5-8bcc58234a9b-kube-api-access-q2vf7\") pod \"openstack-operator-controller-init-757f46c65d-gfksg\" (UID: \"3081c94c-e2f4-48b5-90b5-8bcc58234a9b\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.128622 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.610931 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg"] Feb 02 07:02:19 crc kubenswrapper[4842]: W0202 07:02:19.612680 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3081c94c_e2f4_48b5_90b5_8bcc58234a9b.slice/crio-175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15 WatchSource:0}: Error finding container 175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15: Status 404 returned error can't find the container with id 175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15 Feb 02 07:02:20 crc kubenswrapper[4842]: I0202 07:02:20.411080 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" event={"ID":"3081c94c-e2f4-48b5-90b5-8bcc58234a9b","Type":"ContainerStarted","Data":"175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15"} Feb 02 07:02:24 crc kubenswrapper[4842]: I0202 07:02:24.439669 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" event={"ID":"3081c94c-e2f4-48b5-90b5-8bcc58234a9b","Type":"ContainerStarted","Data":"04da284eb78ef4e13742d90c23b7cae9c13bd64706fe2394cba8a2940b9fdb88"} Feb 02 07:02:24 crc kubenswrapper[4842]: I0202 07:02:24.440060 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:24 crc kubenswrapper[4842]: I0202 07:02:24.483188 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" podStartSLOduration=2.285298463 podStartE2EDuration="6.48315805s" podCreationTimestamp="2026-02-02 07:02:18 +0000 UTC" firstStartedPulling="2026-02-02 07:02:19.614816258 +0000 UTC m=+964.992084180" lastFinishedPulling="2026-02-02 07:02:23.812675815 +0000 UTC m=+969.189943767" observedRunningTime="2026-02-02 07:02:24.470942009 +0000 UTC m=+969.848209941" watchObservedRunningTime="2026-02-02 07:02:24.48315805 +0000 UTC m=+969.860426002" Feb 02 07:02:29 crc kubenswrapper[4842]: I0202 07:02:29.132927 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.714678 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.715906 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.718597 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nv75v" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.720305 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.721300 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.725273 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r8bgn" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.726819 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.761724 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.762397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.765615 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p42lx" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.774177 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.774882 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.779608 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-prsht" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.780649 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.789526 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.792935 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.793905 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.796371 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x95cv" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.816250 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.829272 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.839274 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.839991 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.845431 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6g46d" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.858994 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894796 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67mrk\" (UniqueName: \"kubernetes.io/projected/79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1-kube-api-access-67mrk\") pod \"cinder-operator-controller-manager-8d874c8fc-jknjh\" (UID: \"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894829 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsgg\" (UniqueName: \"kubernetes.io/projected/bda41d33-cd37-4c4d-99d6-3808993000b4-kube-api-access-2vsgg\") pod \"designate-operator-controller-manager-6d9697b7f4-4hrlz\" (UID: \"bda41d33-cd37-4c4d-99d6-3808993000b4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8z8b\" (UniqueName: \"kubernetes.io/projected/bd7497e1-afb6-44b5-8270-1021f837a65a-kube-api-access-m8z8b\") pod \"glance-operator-controller-manager-8886f4c47-xq5nz\" (UID: \"bd7497e1-afb6-44b5-8270-1021f837a65a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894882 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvv9x\" (UniqueName: \"kubernetes.io/projected/17af9a3f-7823-4340-bebc-e50e11807467-kube-api-access-mvv9x\") pod \"heat-operator-controller-manager-69d6db494d-96sfj\" (UID: \"17af9a3f-7823-4340-bebc-e50e11807467\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgtrk\" (UniqueName: \"kubernetes.io/projected/c679df42-e383-4a11-a50d-af9dbd4c4eb0-kube-api-access-sgtrk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-stkw6\" (UID: \"c679df42-e383-4a11-a50d-af9dbd4c4eb0\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.906305 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.906998 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.911483 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-867vq" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.911607 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.935917 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.936878 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.951827 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pmtlr" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.000786 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002504 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67mrk\" (UniqueName: \"kubernetes.io/projected/79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1-kube-api-access-67mrk\") pod \"cinder-operator-controller-manager-8d874c8fc-jknjh\" (UID: \"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002664 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vsgg\" (UniqueName: \"kubernetes.io/projected/bda41d33-cd37-4c4d-99d6-3808993000b4-kube-api-access-2vsgg\") pod \"designate-operator-controller-manager-6d9697b7f4-4hrlz\" (UID: \"bda41d33-cd37-4c4d-99d6-3808993000b4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8z8b\" (UniqueName: \"kubernetes.io/projected/bd7497e1-afb6-44b5-8270-1021f837a65a-kube-api-access-m8z8b\") pod \"glance-operator-controller-manager-8886f4c47-xq5nz\" (UID: \"bd7497e1-afb6-44b5-8270-1021f837a65a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002732 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002764 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvv9x\" (UniqueName: \"kubernetes.io/projected/17af9a3f-7823-4340-bebc-e50e11807467-kube-api-access-mvv9x\") pod \"heat-operator-controller-manager-69d6db494d-96sfj\" (UID: \"17af9a3f-7823-4340-bebc-e50e11807467\") " 
pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29wq7\" (UniqueName: \"kubernetes.io/projected/95850a5b-9e70-4f77-86ee-ff016eae6e7e-kube-api-access-29wq7\") pod \"horizon-operator-controller-manager-5fb775575f-skdgw\" (UID: \"95850a5b-9e70-4f77-86ee-ff016eae6e7e\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002814 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgtrk\" (UniqueName: \"kubernetes.io/projected/c679df42-e383-4a11-a50d-af9dbd4c4eb0-kube-api-access-sgtrk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-stkw6\" (UID: \"c679df42-e383-4a11-a50d-af9dbd4c4eb0\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002836 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7q94\" (UniqueName: \"kubernetes.io/projected/a020d6c0-e749-4442-93e8-64a4c463e9d5-kube-api-access-m7q94\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.039295 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.049414 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.050105 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.054154 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-m2vh5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.054683 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vsgg\" (UniqueName: \"kubernetes.io/projected/bda41d33-cd37-4c4d-99d6-3808993000b4-kube-api-access-2vsgg\") pod \"designate-operator-controller-manager-6d9697b7f4-4hrlz\" (UID: \"bda41d33-cd37-4c4d-99d6-3808993000b4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.067074 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvv9x\" (UniqueName: \"kubernetes.io/projected/17af9a3f-7823-4340-bebc-e50e11807467-kube-api-access-mvv9x\") pod \"heat-operator-controller-manager-69d6db494d-96sfj\" (UID: \"17af9a3f-7823-4340-bebc-e50e11807467\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.068872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8z8b\" (UniqueName: \"kubernetes.io/projected/bd7497e1-afb6-44b5-8270-1021f837a65a-kube-api-access-m8z8b\") pod \"glance-operator-controller-manager-8886f4c47-xq5nz\" (UID: \"bd7497e1-afb6-44b5-8270-1021f837a65a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.071423 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgtrk\" (UniqueName: \"kubernetes.io/projected/c679df42-e383-4a11-a50d-af9dbd4c4eb0-kube-api-access-sgtrk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-stkw6\" (UID: \"c679df42-e383-4a11-a50d-af9dbd4c4eb0\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.071727 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.072648 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.081014 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gqnlp" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.083016 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.084650 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67mrk\" (UniqueName: \"kubernetes.io/projected/79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1-kube-api-access-67mrk\") pod \"cinder-operator-controller-manager-8d874c8fc-jknjh\" (UID: \"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.090399 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.094630 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.101274 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103870 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103915 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk5nf\" (UniqueName: \"kubernetes.io/projected/0222c7fe-6311-4445-bf7f-e43fcb5ec5f9-kube-api-access-xk5nf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-jmvqq\" (UID: \"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29wq7\" (UniqueName: \"kubernetes.io/projected/95850a5b-9e70-4f77-86ee-ff016eae6e7e-kube-api-access-29wq7\") pod \"horizon-operator-controller-manager-5fb775575f-skdgw\" (UID: \"95850a5b-9e70-4f77-86ee-ff016eae6e7e\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103979 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7q94\" (UniqueName: \"kubernetes.io/projected/a020d6c0-e749-4442-93e8-64a4c463e9d5-kube-api-access-m7q94\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.104332 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.104375 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:49.604360771 +0000 UTC m=+994.981628683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.112383 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.113055 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.123420 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qxp58" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.131974 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.152953 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7q94\" (UniqueName: \"kubernetes.io/projected/a020d6c0-e749-4442-93e8-64a4c463e9d5-kube-api-access-m7q94\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.167831 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29wq7\" (UniqueName: \"kubernetes.io/projected/95850a5b-9e70-4f77-86ee-ff016eae6e7e-kube-api-access-29wq7\") pod \"horizon-operator-controller-manager-5fb775575f-skdgw\" (UID: \"95850a5b-9e70-4f77-86ee-ff016eae6e7e\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.192969 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207167 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zw5\" (UniqueName: \"kubernetes.io/projected/bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece-kube-api-access-q8zw5\") pod \"mariadb-operator-controller-manager-67bf948998-nsf9v\" (UID: \"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207235 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdpzs\" (UniqueName: \"kubernetes.io/projected/590654af-c639-4e9d-b821-c6caa1016695-kube-api-access-jdpzs\") pod \"manila-operator-controller-manager-7dd968899f-kz2zn\" (UID: \"590654af-c639-4e9d-b821-c6caa1016695\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207279 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk5nf\" (UniqueName: \"kubernetes.io/projected/0222c7fe-6311-4445-bf7f-e43fcb5ec5f9-kube-api-access-xk5nf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-jmvqq\" (UID: \"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207305 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j4bz\" (UniqueName: \"kubernetes.io/projected/46313c01-1f03-4185-b7c4-2da5420bd703-kube-api-access-8j4bz\") pod \"keystone-operator-controller-manager-84f48565d4-nzz4p\" (UID: \"46313c01-1f03-4185-b7c4-2da5420bd703\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc 
kubenswrapper[4842]: I0202 07:02:49.211274 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.212048 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.219545 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gwb7k" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.229522 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.230325 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.243711 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vskbm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.247879 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk5nf\" (UniqueName: \"kubernetes.io/projected/0222c7fe-6311-4445-bf7f-e43fcb5ec5f9-kube-api-access-xk5nf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-jmvqq\" (UID: \"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.249744 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.250588 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.265709 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5xbfp" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.268147 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.273527 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.274398 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.288327 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xp7ph" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310336 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j4bz\" (UniqueName: \"kubernetes.io/projected/46313c01-1f03-4185-b7c4-2da5420bd703-kube-api-access-8j4bz\") pod \"keystone-operator-controller-manager-84f48565d4-nzz4p\" (UID: \"46313c01-1f03-4185-b7c4-2da5420bd703\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310406 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2k7m\" (UniqueName: \"kubernetes.io/projected/60d10db6-9c42-471b-84fb-58e9c04c60fc-kube-api-access-w2k7m\") pod \"octavia-operator-controller-manager-6687f8d877-wpm9z\" (UID: \"60d10db6-9c42-471b-84fb-58e9c04c60fc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310443 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8ds8\" (UniqueName: \"kubernetes.io/projected/b7d68fac-cffb-4dd6-8c1b-4537a3a36571-kube-api-access-j8ds8\") pod \"nova-operator-controller-manager-55bff696bd-c9lwb\" (UID: \"b7d68fac-cffb-4dd6-8c1b-4537a3a36571\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310464 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8zw5\" (UniqueName: \"kubernetes.io/projected/bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece-kube-api-access-q8zw5\") pod \"mariadb-operator-controller-manager-67bf948998-nsf9v\" (UID: \"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310493 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdpzs\" (UniqueName: \"kubernetes.io/projected/590654af-c639-4e9d-b821-c6caa1016695-kube-api-access-jdpzs\") pod \"manila-operator-controller-manager-7dd968899f-kz2zn\" (UID: \"590654af-c639-4e9d-b821-c6caa1016695\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310533 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdsnm\" (UniqueName: \"kubernetes.io/projected/95d96e63-61f2-4d8d-be72-562384cb6f23-kube-api-access-hdsnm\") pod \"neutron-operator-controller-manager-585dbc889-4zk9c\" (UID: \"95d96e63-61f2-4d8d-be72-562384cb6f23\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.322871 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.326601 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z"] Feb 02 07:02:49 crc kubenswrapper[4842]: 
I0202 07:02:49.333362 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j4bz\" (UniqueName: \"kubernetes.io/projected/46313c01-1f03-4185-b7c4-2da5420bd703-kube-api-access-8j4bz\") pod \"keystone-operator-controller-manager-84f48565d4-nzz4p\" (UID: \"46313c01-1f03-4185-b7c4-2da5420bd703\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.333718 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.334510 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.336877 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.337173 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4btph" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.338918 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.342449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8zw5\" (UniqueName: \"kubernetes.io/projected/bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece-kube-api-access-q8zw5\") pod \"mariadb-operator-controller-manager-67bf948998-nsf9v\" (UID: \"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.343724 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.352935 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdpzs\" (UniqueName: \"kubernetes.io/projected/590654af-c639-4e9d-b821-c6caa1016695-kube-api-access-jdpzs\") pod \"manila-operator-controller-manager-7dd968899f-kz2zn\" (UID: \"590654af-c639-4e9d-b821-c6caa1016695\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.358383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.363574 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.364726 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.371393 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fxfcc" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.400104 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.411533 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8ds8\" (UniqueName: \"kubernetes.io/projected/b7d68fac-cffb-4dd6-8c1b-4537a3a36571-kube-api-access-j8ds8\") pod \"nova-operator-controller-manager-55bff696bd-c9lwb\" (UID: \"b7d68fac-cffb-4dd6-8c1b-4537a3a36571\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.411902 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.411940 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mn6p\" (UniqueName: \"kubernetes.io/projected/5e7a9701-ed45-4289-8272-f850efbf1e75-kube-api-access-9mn6p\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.412032 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdsnm\" (UniqueName: \"kubernetes.io/projected/95d96e63-61f2-4d8d-be72-562384cb6f23-kube-api-access-hdsnm\") pod \"neutron-operator-controller-manager-585dbc889-4zk9c\" (UID: \"95d96e63-61f2-4d8d-be72-562384cb6f23\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.412168 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2k7m\" (UniqueName: \"kubernetes.io/projected/60d10db6-9c42-471b-84fb-58e9c04c60fc-kube-api-access-w2k7m\") pod \"octavia-operator-controller-manager-6687f8d877-wpm9z\" (UID: \"60d10db6-9c42-471b-84fb-58e9c04c60fc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.412240 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znpcb\" (UniqueName: \"kubernetes.io/projected/255c38ec-b5b8-4017-94b8-93553884ed09-kube-api-access-znpcb\") pod \"ovn-operator-controller-manager-788c46999f-d8nns\" (UID: \"255c38ec-b5b8-4017-94b8-93553884ed09\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.426345 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"] Feb 02 07:02:49 crc 
kubenswrapper[4842]: I0202 07:02:49.434372 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdsnm\" (UniqueName: \"kubernetes.io/projected/95d96e63-61f2-4d8d-be72-562384cb6f23-kube-api-access-hdsnm\") pod \"neutron-operator-controller-manager-585dbc889-4zk9c\" (UID: \"95d96e63-61f2-4d8d-be72-562384cb6f23\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.436272 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8ds8\" (UniqueName: \"kubernetes.io/projected/b7d68fac-cffb-4dd6-8c1b-4537a3a36571-kube-api-access-j8ds8\") pod \"nova-operator-controller-manager-55bff696bd-c9lwb\" (UID: \"b7d68fac-cffb-4dd6-8c1b-4537a3a36571\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.443410 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2k7m\" (UniqueName: \"kubernetes.io/projected/60d10db6-9c42-471b-84fb-58e9c04c60fc-kube-api-access-w2k7m\") pod \"octavia-operator-controller-manager-6687f8d877-wpm9z\" (UID: \"60d10db6-9c42-471b-84fb-58e9c04c60fc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.453335 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.454012 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.456641 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4dg6v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.464542 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.501835 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.502681 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.504612 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6mzl2" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513314 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znpcb\" (UniqueName: \"kubernetes.io/projected/255c38ec-b5b8-4017-94b8-93553884ed09-kube-api-access-znpcb\") pod \"ovn-operator-controller-manager-788c46999f-d8nns\" (UID: \"255c38ec-b5b8-4017-94b8-93553884ed09\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513716 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxdq\" (UniqueName: \"kubernetes.io/projected/58dd3197-be46-474d-84f5-c066a9483a52-kube-api-access-fjxdq\") pod \"placement-operator-controller-manager-5b964cf4cd-qlxtv\" (UID: \"58dd3197-be46-474d-84f5-c066a9483a52\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513857 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgfkj\" (UniqueName: \"kubernetes.io/projected/6344fbd8-d71a-4461-ad9a-ad71e339ba03-kube-api-access-hgfkj\") pod \"swift-operator-controller-manager-68fc8c869-lbjfv\" (UID: \"6344fbd8-d71a-4461-ad9a-ad71e339ba03\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513899 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513956 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mn6p\" (UniqueName: \"kubernetes.io/projected/5e7a9701-ed45-4289-8272-f850efbf1e75-kube-api-access-9mn6p\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.513984 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.514047 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.014028111 +0000 UTC m=+995.391296023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.524289 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.529809 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.530308 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znpcb\" (UniqueName: \"kubernetes.io/projected/255c38ec-b5b8-4017-94b8-93553884ed09-kube-api-access-znpcb\") pod \"ovn-operator-controller-manager-788c46999f-d8nns\" (UID: \"255c38ec-b5b8-4017-94b8-93553884ed09\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.533980 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mn6p\" (UniqueName: \"kubernetes.io/projected/5e7a9701-ed45-4289-8272-f850efbf1e75-kube-api-access-9mn6p\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.537957 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.544006 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.572319 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.573393 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.574630 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.577316 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-7bj8g" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.597861 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.609013 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617306 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgfkj\" (UniqueName: \"kubernetes.io/projected/6344fbd8-d71a-4461-ad9a-ad71e339ba03-kube-api-access-hgfkj\") pod \"swift-operator-controller-manager-68fc8c869-lbjfv\" (UID: \"6344fbd8-d71a-4461-ad9a-ad71e339ba03\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617367 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617496 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxdq\" (UniqueName: \"kubernetes.io/projected/58dd3197-be46-474d-84f5-c066a9483a52-kube-api-access-fjxdq\") pod \"placement-operator-controller-manager-5b964cf4cd-qlxtv\" (UID: \"58dd3197-be46-474d-84f5-c066a9483a52\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617544 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszm7\" (UniqueName: \"kubernetes.io/projected/7db6967e-a602-49a0-83f6-e1caff831173-kube-api-access-rszm7\") pod \"telemetry-operator-controller-manager-64b5b76f97-q7vh6\" (UID: \"7db6967e-a602-49a0-83f6-e1caff831173\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.617955 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.618007 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.617992542 +0000 UTC m=+995.995260454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.633115 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.635765 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxdq\" (UniqueName: \"kubernetes.io/projected/58dd3197-be46-474d-84f5-c066a9483a52-kube-api-access-fjxdq\") pod \"placement-operator-controller-manager-5b964cf4cd-qlxtv\" (UID: \"58dd3197-be46-474d-84f5-c066a9483a52\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.640978 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgfkj\" (UniqueName: \"kubernetes.io/projected/6344fbd8-d71a-4461-ad9a-ad71e339ba03-kube-api-access-hgfkj\") pod \"swift-operator-controller-manager-68fc8c869-lbjfv\" (UID: \"6344fbd8-d71a-4461-ad9a-ad71e339ba03\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.647481 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.674047 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.711884 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.718760 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszm7\" (UniqueName: \"kubernetes.io/projected/7db6967e-a602-49a0-83f6-e1caff831173-kube-api-access-rszm7\") pod \"telemetry-operator-controller-manager-64b5b76f97-q7vh6\" (UID: \"7db6967e-a602-49a0-83f6-e1caff831173\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.718800 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hwl\" (UniqueName: \"kubernetes.io/projected/3fb9fda7-8167-4f3d-947b-3e002278ad99-kube-api-access-v5hwl\") pod \"test-operator-controller-manager-56f8bfcd9f-4q9m5\" (UID: \"3fb9fda7-8167-4f3d-947b-3e002278ad99\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.724314 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4ndxm"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.725386 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.729074 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q4pcp" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.729455 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.741555 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4ndxm"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.749231 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszm7\" (UniqueName: \"kubernetes.io/projected/7db6967e-a602-49a0-83f6-e1caff831173-kube-api-access-rszm7\") pod \"telemetry-operator-controller-manager-64b5b76f97-q7vh6\" (UID: \"7db6967e-a602-49a0-83f6-e1caff831173\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.768169 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.769529 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.771474 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.771593 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.773203 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j9fct" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.777919 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.778873 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820669 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj64g\" (UniqueName: \"kubernetes.io/projected/de128384-b923-4536-a485-33e65a1b7e04-kube-api-access-sj64g\") pod \"watcher-operator-controller-manager-564965969-4ndxm\" (UID: \"de128384-b923-4536-a485-33e65a1b7e04\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820725 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820779 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hwl\" (UniqueName: \"kubernetes.io/projected/3fb9fda7-8167-4f3d-947b-3e002278ad99-kube-api-access-v5hwl\") pod \"test-operator-controller-manager-56f8bfcd9f-4q9m5\" (UID: \"3fb9fda7-8167-4f3d-947b-3e002278ad99\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820800 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrcg\" (UniqueName: \"kubernetes.io/projected/6b1810ad-df0b-44b5-8ba8-953039b85411-kube-api-access-hzrcg\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820975 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.838046 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.839008 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.842162 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-75lch" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.842885 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hwl\" (UniqueName: \"kubernetes.io/projected/3fb9fda7-8167-4f3d-947b-3e002278ad99-kube-api-access-v5hwl\") pod \"test-operator-controller-manager-56f8bfcd9f-4q9m5\" (UID: \"3fb9fda7-8167-4f3d-947b-3e002278ad99\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.842981 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.860166 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.875296 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.884541 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.897357 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.912277 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923826 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj64g\" (UniqueName: \"kubernetes.io/projected/de128384-b923-4536-a485-33e65a1b7e04-kube-api-access-sj64g\") pod \"watcher-operator-controller-manager-564965969-4ndxm\" (UID: \"de128384-b923-4536-a485-33e65a1b7e04\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923878 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923902 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzrcg\" (UniqueName: \"kubernetes.io/projected/6b1810ad-df0b-44b5-8ba8-953039b85411-kube-api-access-hzrcg\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923963 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923988 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl77c\" (UniqueName: \"kubernetes.io/projected/1fffe017-3a94-4565-9778-ccea208aa8cc-kube-api-access-xl77c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zbqhn\" (UID: \"1fffe017-3a94-4565-9778-ccea208aa8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924367 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924392 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924406 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.42439154 +0000 UTC m=+995.801659452 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924484 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.424456592 +0000 UTC m=+995.801724504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.945355 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj64g\" (UniqueName: \"kubernetes.io/projected/de128384-b923-4536-a485-33e65a1b7e04-kube-api-access-sj64g\") pod \"watcher-operator-controller-manager-564965969-4ndxm\" (UID: \"de128384-b923-4536-a485-33e65a1b7e04\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.950015 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzrcg\" (UniqueName: \"kubernetes.io/projected/6b1810ad-df0b-44b5-8ba8-953039b85411-kube-api-access-hzrcg\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.964733 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6"] Feb 02 07:02:49 crc kubenswrapper[4842]: W0202 07:02:49.974953 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc679df42_e383_4a11_a50d_af9dbd4c4eb0.slice/crio-bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a WatchSource:0}: Error finding container bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a: Status 404 returned error can't find the container with id bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.002757 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.026093 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.026193 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl77c\" (UniqueName: \"kubernetes.io/projected/1fffe017-3a94-4565-9778-ccea208aa8cc-kube-api-access-xl77c\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-zbqhn\" (UID: \"1fffe017-3a94-4565-9778-ccea208aa8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.026586 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.026626 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:51.026612848 +0000 UTC m=+996.403880760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.061941 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl77c\" (UniqueName: \"kubernetes.io/projected/1fffe017-3a94-4565-9778-ccea208aa8cc-kube-api-access-xl77c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zbqhn\" (UID: \"1fffe017-3a94-4565-9778-ccea208aa8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.072270 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.107976 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh"] Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.137696 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0222c7fe_6311_4445_bf7f_e43fcb5ec5f9.slice/crio-34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b WatchSource:0}: Error finding container 34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b: Status 404 returned error can't find the container with id 34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.200900 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.435702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.436172 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.435912 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.436264 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:51.436247679 +0000 UTC m=+996.813515591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.436392 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.436451 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:51.436434473 +0000 UTC m=+996.813702465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.515432 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.522488 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.534373 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.535366 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.546066 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw"] Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.547181 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod590654af_c639_4e9d_b821_c6caa1016695.slice/crio-31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44 WatchSource:0}: Error finding container 31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44: Status 404 returned error can't find the container with id 31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44 Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.608657 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns"] Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.608714 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod255c38ec_b5b8_4017_94b8_93553884ed09.slice/crio-8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830 WatchSource:0}: Error finding container 8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830: Status 404 returned error can't find the container with id 8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830 Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.610352 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d10db6_9c42_471b_84fb_58e9c04c60fc.slice/crio-7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23 WatchSource:0}: Error finding container 7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23: Status 404 returned error can't find the container with id 7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23 Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.614191 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfe64bf6_fea9_4b04_b4ff_74fe4b9c2ece.slice/crio-0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905 WatchSource:0}: Error finding container 0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905: Status 404 returned error can't find the 
container with id 0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905 Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.619301 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.625054 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.638926 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" event={"ID":"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece","Type":"ContainerStarted","Data":"0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.640667 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" event={"ID":"bd7497e1-afb6-44b5-8270-1021f837a65a","Type":"ContainerStarted","Data":"3c03de3673e9d65cdc99b54b699a6399d710d9434194e90b9639ec136030d25d"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.642207 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" event={"ID":"46313c01-1f03-4185-b7c4-2da5420bd703","Type":"ContainerStarted","Data":"9f519153c63f4d593ed30e9c77236fe5bb587b497e279fa9c0d2e63e9697ef28"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.642869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.642974 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.643039 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:52.643021122 +0000 UTC m=+998.020289034 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.643155 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" event={"ID":"17af9a3f-7823-4340-bebc-e50e11807467","Type":"ContainerStarted","Data":"25c1217cd3dd79d04727016625e46be9d83e1e7c7a418748539a40f220891e63"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.644238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" event={"ID":"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1","Type":"ContainerStarted","Data":"31fe857424e518dc59c8d4f98cd6183be8851b78e815706f9f4844860fab74fb"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.645238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" event={"ID":"95d96e63-61f2-4d8d-be72-562384cb6f23","Type":"ContainerStarted","Data":"1da9fd5908869a15666046e0dbecdf5c1108fd3064ba24131f41af4670151ac7"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.646241 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" event={"ID":"bda41d33-cd37-4c4d-99d6-3808993000b4","Type":"ContainerStarted","Data":"aa46e9d4e396ca970079ecd6a3351ba8f4a5995de208c81cbeb836d9f5a06dd0"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.647062 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" event={"ID":"95850a5b-9e70-4f77-86ee-ff016eae6e7e","Type":"ContainerStarted","Data":"a0a91e07e908835f6e3ea0cb6f133a874ead96fe2a6f7b48fdee1f6c4a8a07ca"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.648553 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" event={"ID":"255c38ec-b5b8-4017-94b8-93553884ed09","Type":"ContainerStarted","Data":"8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.649403 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" event={"ID":"c679df42-e383-4a11-a50d-af9dbd4c4eb0","Type":"ContainerStarted","Data":"bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.650403 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" event={"ID":"60d10db6-9c42-471b-84fb-58e9c04c60fc","Type":"ContainerStarted","Data":"7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.651722 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" event={"ID":"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9","Type":"ContainerStarted","Data":"34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.653082 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" event={"ID":"590654af-c639-4e9d-b821-c6caa1016695","Type":"ContainerStarted","Data":"31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.654370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" event={"ID":"b7d68fac-cffb-4dd6-8c1b-4537a3a36571","Type":"ContainerStarted","Data":"9cdbbc0b3c68ecb4483e024288faf64e636d49d33783d443c6db9d3f1ff28cfa"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.759741 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn"] Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.761202 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fffe017_3a94_4565_9778_ccea208aa8cc.slice/crio-9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98 WatchSource:0}: Error finding container 9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98: Status 404 returned error can't find the container with id 9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98 Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.763562 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xl77c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zbqhn_openstack-operators(1fffe017-3a94-4565-9778-ccea208aa8cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.764287 4842 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"] Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.764828 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" podUID="1fffe017-3a94-4565-9778-ccea208aa8cc" Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.765558 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58dd3197_be46_474d_84f5_c066a9483a52.slice/crio-83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434 WatchSource:0}: Error finding container 83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434: Status 404 returned error can't find the container with id 83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434 Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.768030 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjxdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-qlxtv_openstack-operators(58dd3197-be46-474d-84f5-c066a9483a52): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.769284 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podUID="58dd3197-be46-474d-84f5-c066a9483a52" Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.052174 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.052331 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.052388 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:53.052373836 +0000 UTC m=+998.429641748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.794050 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.794777 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.794913 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.794969 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:53.794948848 +0000 UTC m=+999.172216760 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.795023 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.795049 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:53.795040571 +0000 UTC m=+999.172308483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.874185 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5"] Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.874247 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"] Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.875815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" event={"ID":"58dd3197-be46-474d-84f5-c066a9483a52","Type":"ContainerStarted","Data":"83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434"} Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.876431 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"] Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.878794 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podUID="58dd3197-be46-474d-84f5-c066a9483a52" Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.884910 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4ndxm"] Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.885969 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" event={"ID":"1fffe017-3a94-4565-9778-ccea208aa8cc","Type":"ContainerStarted","Data":"9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98"} Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.888464 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" 
podUID="1fffe017-3a94-4565-9778-ccea208aa8cc" Feb 02 07:02:52 crc kubenswrapper[4842]: W0202 07:02:52.482245 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb9fda7_8167_4f3d_947b_3e002278ad99.slice/crio-621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8 WatchSource:0}: Error finding container 621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8: Status 404 returned error can't find the container with id 621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8 Feb 02 07:02:52 crc kubenswrapper[4842]: I0202 07:02:52.704567 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.704701 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.704755 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:56.70473643 +0000 UTC m=+1002.082004332 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:52 crc kubenswrapper[4842]: I0202 07:02:52.895522 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" event={"ID":"3fb9fda7-8167-4f3d-947b-3e002278ad99","Type":"ContainerStarted","Data":"621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8"} Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.897188 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podUID="58dd3197-be46-474d-84f5-c066a9483a52" Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.898448 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" podUID="1fffe017-3a94-4565-9778-ccea208aa8cc" Feb 02 07:02:53 crc kubenswrapper[4842]: W0202 07:02:53.107700 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7db6967e_a602_49a0_83f6_e1caff831173.slice/crio-6f1aa3346367608698f717b93c278fb081a45071254640485f6fc994679ae853 WatchSource:0}: Error finding container 
Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.112382 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"
Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.112514 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.112562 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:57.112546985 +0000 UTC m=+1002.489814907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.822066 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"
Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.822414 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"
Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822255 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822503 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:57.822475183 +0000 UTC m=+1003.199743095 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found
Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822549 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822598 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:57.822583755 +0000 UTC m=+1003.199851667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found
Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.917314 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" event={"ID":"7db6967e-a602-49a0-83f6-e1caff831173","Type":"ContainerStarted","Data":"6f1aa3346367608698f717b93c278fb081a45071254640485f6fc994679ae853"}
Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.919002 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" event={"ID":"6344fbd8-d71a-4461-ad9a-ad71e339ba03","Type":"ContainerStarted","Data":"528cbaf33968cb73a4060888a4d50295e5bf5e75d4d7c28bbc71839e750edca1"}
Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.920414 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" event={"ID":"de128384-b923-4536-a485-33e65a1b7e04","Type":"ContainerStarted","Data":"be8887bf63c6d3d2fd4ff8c2612b2a0ef8096e3ed573a8e17c7fcfdc3145dc28"}
Feb 02 07:02:56 crc kubenswrapper[4842]: I0202 07:02:56.714415 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"
Feb 02 07:02:56 crc kubenswrapper[4842]: E0202 07:02:56.714563 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 07:02:56 crc kubenswrapper[4842]: E0202 07:02:56.714621 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:04.714603705 +0000 UTC m=+1010.091871617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found
Feb 02 07:02:57 crc kubenswrapper[4842]: I0202 07:02:57.119627 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"
Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.119820 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.120021 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:05.120004031 +0000 UTC m=+1010.497271943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 07:02:57 crc kubenswrapper[4842]: I0202 07:02:57.830366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"
Feb 02 07:02:57 crc kubenswrapper[4842]: I0202 07:02:57.830542 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"
Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830547 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830640 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830643 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:05.830624336 +0000 UTC m=+1011.207892258 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found
Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830706 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:05.830695068 +0000 UTC m=+1011.207962990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found
Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.747882 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"
Feb 02 07:03:04 crc kubenswrapper[4842]: E0202 07:03:04.748594 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 07:03:04 crc kubenswrapper[4842]: E0202 07:03:04.748652 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:20.748632739 +0000 UTC m=+1026.125900651 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.992829 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" event={"ID":"255c38ec-b5b8-4017-94b8-93553884ed09","Type":"ContainerStarted","Data":"a68d2bfdff879f71626aeb99ec77e40470c07c1be76606e1137d2ce34b80668c"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.993718 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.994829 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" event={"ID":"17af9a3f-7823-4340-bebc-e50e11807467","Type":"ContainerStarted","Data":"cc0ac8431577b0a19cfa2645b2a9f92aadf4b862d21a8b09a84245d3aa7d618b"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.995188 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.996436 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" event={"ID":"c679df42-e383-4a11-a50d-af9dbd4c4eb0","Type":"ContainerStarted","Data":"9218e5a1962b1eee936d17d4a2184c3a2ce8f79672d906325f29b4c72c3cedcf"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.996762 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.998586 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" event={"ID":"7db6967e-a602-49a0-83f6-e1caff831173","Type":"ContainerStarted","Data":"e7a2e1f30bbf786d5b0be89a35166dde264ea2b83d43087ba90c90ee55d2dc03"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.998801 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.000115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" event={"ID":"60d10db6-9c42-471b-84fb-58e9c04c60fc","Type":"ContainerStarted","Data":"dabef8d40aa3aadb87fe0ee4f895b59d440356a456e883e20e1c11f9a4643aac"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.000260 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.001204 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" event={"ID":"95850a5b-9e70-4f77-86ee-ff016eae6e7e","Type":"ContainerStarted","Data":"5cf5674601202f50fad92fa117b7e0d95ca06511e5cddbfd639b285d9dea79a6"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.001549 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.002686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" event={"ID":"6344fbd8-d71a-4461-ad9a-ad71e339ba03","Type":"ContainerStarted","Data":"961f25c4071b420b3c28eec91f6c5050f3efcfce377a7034ff097ae19a75d543"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.003010 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.004353 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" event={"ID":"95d96e63-61f2-4d8d-be72-562384cb6f23","Type":"ContainerStarted","Data":"9a5af20c2b4af945d6c2fa6a18cbf7ef29af80ac7cdda6ad6e779ff6683afa91"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.004693 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.006240 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" event={"ID":"46313c01-1f03-4185-b7c4-2da5420bd703","Type":"ContainerStarted","Data":"c3570aae8356f1f5bb4c29b4df257b759d70d0422b301f0b1b85795932c0cdc5"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.006433 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.007439 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" event={"ID":"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9","Type":"ContainerStarted","Data":"c992b6b2f75e632507c82e80c4d1782f7fa85c9eb1d5de105398f0dd31698833"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.007786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.009260 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" event={"ID":"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece","Type":"ContainerStarted","Data":"e4d307ab82c2777a3782aef180e59ddbe5b53a1ec6f0d1b9a26d444b3768186d"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.009648 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.010912 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" event={"ID":"b7d68fac-cffb-4dd6-8c1b-4537a3a36571","Type":"ContainerStarted","Data":"ce7bf6be491febf7a3c5ec656f9542c910889c6ee977cef8c6ef3b84d33073d8"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.011285 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.012501 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" event={"ID":"3fb9fda7-8167-4f3d-947b-3e002278ad99","Type":"ContainerStarted","Data":"ea4b6c434d50259cca3dc3812f8a27713347fe01ac4dc1e4240ce6e84ff96ad2"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.012815 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.014245 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" event={"ID":"de128384-b923-4536-a485-33e65a1b7e04","Type":"ContainerStarted","Data":"b617ca787c04cfa1d65e7476aedd58afd478467d0b358e49b18b604631129436"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.014570 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.016284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" event={"ID":"bd7497e1-afb6-44b5-8270-1021f837a65a","Type":"ContainerStarted","Data":"b9061eaf0783cbd2cba0a6e107f49421914d7d8b39f4804187ca2dd1bbcbac03"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.016610 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.018009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" event={"ID":"bda41d33-cd37-4c4d-99d6-3808993000b4","Type":"ContainerStarted","Data":"49bb9d40f48cc5192f719cb5ff828c798d3802d8a0bf6bec8d3d627b8cb1484d"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.018397 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.019476 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" event={"ID":"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1","Type":"ContainerStarted","Data":"50267b4164965f2fc34476684b3b0b0d81c5c940bb8d1a9eb0e105acbf45d710"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.019821 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.021262 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" event={"ID":"590654af-c639-4e9d-b821-c6caa1016695","Type":"ContainerStarted","Data":"9e3864674ef6fe202b85b741727442cf290c7be8379a0ecd3a08bc6c4a19ff97"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.021612 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.076340 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" podStartSLOduration=2.880715941 podStartE2EDuration="16.076323622s" podCreationTimestamp="2026-02-02 07:02:49 +0000 
UTC" firstStartedPulling="2026-02-02 07:02:50.610940492 +0000 UTC m=+995.988208404" lastFinishedPulling="2026-02-02 07:03:03.806548163 +0000 UTC m=+1009.183816085" observedRunningTime="2026-02-02 07:03:05.029412576 +0000 UTC m=+1010.406680488" watchObservedRunningTime="2026-02-02 07:03:05.076323622 +0000 UTC m=+1010.453591534" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.146352 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" podStartSLOduration=3.8937158309999997 podStartE2EDuration="17.146336256s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.548677758 +0000 UTC m=+995.925945670" lastFinishedPulling="2026-02-02 07:03:03.801298173 +0000 UTC m=+1009.178566095" observedRunningTime="2026-02-02 07:03:05.076531707 +0000 UTC m=+1010.453799639" watchObservedRunningTime="2026-02-02 07:03:05.146336256 +0000 UTC m=+1010.523604168" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.146576 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" podStartSLOduration=3.955278608 podStartE2EDuration="17.146572622s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.614071759 +0000 UTC m=+995.991339671" lastFinishedPulling="2026-02-02 07:03:03.805365763 +0000 UTC m=+1009.182633685" observedRunningTime="2026-02-02 07:03:05.143868565 +0000 UTC m=+1010.521136477" watchObservedRunningTime="2026-02-02 07:03:05.146572622 +0000 UTC m=+1010.523840534" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.155982 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.158849 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.158902 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:21.158884995 +0000 UTC m=+1026.536152907 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.222986 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" podStartSLOduration=3.966528915 podStartE2EDuration="17.222970694s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.552738848 +0000 UTC m=+995.930006760" lastFinishedPulling="2026-02-02 07:03:03.809180617 +0000 UTC m=+1009.186448539" observedRunningTime="2026-02-02 07:03:05.181919873 +0000 UTC m=+1010.559187795" watchObservedRunningTime="2026-02-02 07:03:05.222970694 +0000 UTC m=+1010.600238606" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.224273 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" podStartSLOduration=5.529837268 podStartE2EDuration="16.224267646s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:53.111017007 +0000 UTC m=+998.488284929" lastFinishedPulling="2026-02-02 07:03:03.805447385 +0000 UTC m=+1009.182715307" observedRunningTime="2026-02-02 07:03:05.218496754 +0000 UTC m=+1010.595764666" watchObservedRunningTime="2026-02-02 07:03:05.224267646 +0000 UTC m=+1010.601535558" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.259805 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" podStartSLOduration=4.003777633 podStartE2EDuration="17.259788151s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.54912941 +0000 UTC m=+995.926397322" lastFinishedPulling="2026-02-02 07:03:03.805139928 +0000 UTC m=+1009.182407840" observedRunningTime="2026-02-02 07:03:05.255493035 +0000 UTC m=+1010.632760947" watchObservedRunningTime="2026-02-02 07:03:05.259788151 +0000 UTC m=+1010.637056063" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.308840 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" podStartSLOduration=4.182488865 podStartE2EDuration="17.308824509s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.616490259 +0000 UTC m=+995.993758171" lastFinishedPulling="2026-02-02 07:03:03.742825893 +0000 UTC m=+1009.120093815" observedRunningTime="2026-02-02 07:03:05.303126009 +0000 UTC m=+1010.680393921" watchObservedRunningTime="2026-02-02 07:03:05.308824509 +0000 UTC m=+1010.686092421" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.389023 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" podStartSLOduration=8.97582006 podStartE2EDuration="17.389005124s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.803444821 +0000 UTC m=+995.180712733" lastFinishedPulling="2026-02-02 07:02:58.216629885 +0000 UTC m=+1003.593897797" observedRunningTime="2026-02-02 07:03:05.347772858 +0000 UTC m=+1010.725040770" 
watchObservedRunningTime="2026-02-02 07:03:05.389005124 +0000 UTC m=+1010.766273036" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.392185 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" podStartSLOduration=10.596458992 podStartE2EDuration="17.392174612s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.827736349 +0000 UTC m=+995.205004261" lastFinishedPulling="2026-02-02 07:02:56.623451969 +0000 UTC m=+1002.000719881" observedRunningTime="2026-02-02 07:03:05.38436828 +0000 UTC m=+1010.761636192" watchObservedRunningTime="2026-02-02 07:03:05.392174612 +0000 UTC m=+1010.769442524" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.426118 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" podStartSLOduration=3.839838275 podStartE2EDuration="17.426103228s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.156614271 +0000 UTC m=+995.533882183" lastFinishedPulling="2026-02-02 07:03:03.742879214 +0000 UTC m=+1009.120147136" observedRunningTime="2026-02-02 07:03:05.420606222 +0000 UTC m=+1010.797874134" watchObservedRunningTime="2026-02-02 07:03:05.426103228 +0000 UTC m=+1010.803371140" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.458887 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" podStartSLOduration=10.82209248 podStartE2EDuration="17.458868085s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.986673114 +0000 UTC m=+995.363941026" lastFinishedPulling="2026-02-02 07:02:56.623448719 +0000 UTC m=+1002.000716631" observedRunningTime="2026-02-02 07:03:05.452684633 +0000 UTC m=+1010.829952545" watchObservedRunningTime="2026-02-02 07:03:05.458868085 +0000 UTC m=+1010.836135997" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.486673 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" podStartSLOduration=4.231715227 podStartE2EDuration="17.486652399s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.550457782 +0000 UTC m=+995.927725694" lastFinishedPulling="2026-02-02 07:03:03.805394954 +0000 UTC m=+1009.182662866" observedRunningTime="2026-02-02 07:03:05.486517256 +0000 UTC m=+1010.863785168" watchObservedRunningTime="2026-02-02 07:03:05.486652399 +0000 UTC m=+1010.863920311" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.537996 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" podStartSLOduration=5.849944973 podStartE2EDuration="16.537977904s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:53.111493029 +0000 UTC m=+998.488760951" lastFinishedPulling="2026-02-02 07:03:03.79952596 +0000 UTC m=+1009.176793882" observedRunningTime="2026-02-02 07:03:05.537453341 +0000 UTC m=+1010.914721273" watchObservedRunningTime="2026-02-02 07:03:05.537977904 +0000 UTC m=+1010.915245806" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.585064 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" podStartSLOduration=10.347083448 podStartE2EDuration="17.585046883s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.910758834 +0000 UTC m=+995.288026746" lastFinishedPulling="2026-02-02 07:02:57.148722269 +0000 UTC m=+1002.525990181" observedRunningTime="2026-02-02 07:03:05.583126936 +0000 UTC m=+1010.960394848" watchObservedRunningTime="2026-02-02 07:03:05.585046883 +0000 UTC m=+1010.962314795" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.614004 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" podStartSLOduration=5.29528772 podStartE2EDuration="16.613991636s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:52.486554575 +0000 UTC m=+997.863822487" lastFinishedPulling="2026-02-02 07:03:03.805258481 +0000 UTC m=+1009.182526403" observedRunningTime="2026-02-02 07:03:05.612669384 +0000 UTC m=+1010.989937296" watchObservedRunningTime="2026-02-02 07:03:05.613991636 +0000 UTC m=+1010.991259548" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.639527 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" podStartSLOduration=4.393588235 podStartE2EDuration="17.639510945s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.554748288 +0000 UTC m=+995.932016200" lastFinishedPulling="2026-02-02 07:03:03.800670978 +0000 UTC m=+1009.177938910" observedRunningTime="2026-02-02 07:03:05.637111736 +0000 UTC m=+1011.014379648" watchObservedRunningTime="2026-02-02 07:03:05.639510945 +0000 UTC m=+1011.016778847" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.669428 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" podStartSLOduration=6.520832225 podStartE2EDuration="17.669409741s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.142435701 +0000 UTC m=+995.519703613" lastFinishedPulling="2026-02-02 07:03:01.291013177 +0000 UTC m=+1006.668281129" observedRunningTime="2026-02-02 07:03:05.661536467 +0000 UTC m=+1011.038804379" watchObservedRunningTime="2026-02-02 07:03:05.669409741 +0000 UTC m=+1011.046677653" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.704672 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" podStartSLOduration=5.981290369 podStartE2EDuration="16.70465727s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:53.111426927 +0000 UTC m=+998.488694849" lastFinishedPulling="2026-02-02 07:03:03.834793838 +0000 UTC m=+1009.212061750" observedRunningTime="2026-02-02 07:03:05.700557309 +0000 UTC m=+1011.077825221" watchObservedRunningTime="2026-02-02 07:03:05.70465727 +0000 UTC m=+1011.081925182" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.878017 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " 
pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.878098 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878157 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878198 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878226 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:21.878201475 +0000 UTC m=+1027.255469387 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878244 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:21.878233145 +0000 UTC m=+1027.255501057 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.098870 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.099419 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.141994 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.341971 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.354352 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.363553 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.467786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.546877 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.583847 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.612646 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.647771 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.667127 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.680867 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.714896 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.782666 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:03:09 crc 
kubenswrapper[4842]: I0202 07:03:09.845652 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.914461 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:03:10 crc kubenswrapper[4842]: I0202 07:03:10.075405 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:03:12 crc kubenswrapper[4842]: I0202 07:03:12.085865 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" event={"ID":"58dd3197-be46-474d-84f5-c066a9483a52","Type":"ContainerStarted","Data":"c2bb7d6b16e06976dbd7930e31a90a72bfecf22fad35c493144fb56e6d35e484"} Feb 02 07:03:12 crc kubenswrapper[4842]: I0202 07:03:12.115497 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podStartSLOduration=2.797290336 podStartE2EDuration="23.115473909s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.767919719 +0000 UTC m=+996.145187631" lastFinishedPulling="2026-02-02 07:03:11.086103292 +0000 UTC m=+1016.463371204" observedRunningTime="2026-02-02 07:03:12.111727107 +0000 UTC m=+1017.488995079" watchObservedRunningTime="2026-02-02 07:03:12.115473909 +0000 UTC m=+1017.492741821" Feb 02 07:03:14 crc kubenswrapper[4842]: I0202 07:03:14.101770 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" event={"ID":"1fffe017-3a94-4565-9778-ccea208aa8cc","Type":"ContainerStarted","Data":"6c3269643ef3bf6010400ce9141ac23c939307b53f5e5561fcba170a103c369c"} Feb 02 07:03:14 crc kubenswrapper[4842]: I0202 07:03:14.126084 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" podStartSLOduration=2.770613809 podStartE2EDuration="25.126041676s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.763438089 +0000 UTC m=+996.140706001" lastFinishedPulling="2026-02-02 07:03:13.118865966 +0000 UTC m=+1018.496133868" observedRunningTime="2026-02-02 07:03:14.118601643 +0000 UTC m=+1019.495869555" watchObservedRunningTime="2026-02-02 07:03:14.126041676 +0000 UTC m=+1019.503309628" Feb 02 07:03:19 crc kubenswrapper[4842]: I0202 07:03:19.730806 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:03:19 crc kubenswrapper[4842]: I0202 07:03:19.733694 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:03:20 crc kubenswrapper[4842]: I0202 07:03:20.796832 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:20 crc kubenswrapper[4842]: I0202 
07:03:20.804768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.029904 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-867vq" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.038374 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.202388 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.211966 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.218630 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4btph" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.227308 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.454184 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"] Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.525008 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"] Feb 02 07:03:21 crc kubenswrapper[4842]: W0202 07:03:21.531247 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda020d6c0_e749_4442_93e8_64a4c463e9d5.slice/crio-d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3 WatchSource:0}: Error finding container d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3: Status 404 returned error can't find the container with id d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3 Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.913610 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.914139 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.918691 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.919244 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.183783 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" event={"ID":"5e7a9701-ed45-4289-8272-f850efbf1e75","Type":"ContainerStarted","Data":"b6c3d4d33934752d92a00f58f8958167d1f85b985028cc89e6bd099f41ed776c"} Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.185023 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" event={"ID":"a020d6c0-e749-4442-93e8-64a4c463e9d5","Type":"ContainerStarted","Data":"d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3"} Feb 02 07:03:22 crc 
kubenswrapper[4842]: I0202 07:03:22.208672 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j9fct" Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.217002 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.658438 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"] Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.204058 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" event={"ID":"a020d6c0-e749-4442-93e8-64a4c463e9d5","Type":"ContainerStarted","Data":"cb265626b6818b1fda83dd99bc60d3d3c5c79c30adce8ae720684c2777192fc3"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.204488 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.206246 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" event={"ID":"6b1810ad-df0b-44b5-8ba8-953039b85411","Type":"ContainerStarted","Data":"83762abb9d1ff837f25e4edd01995bd03f53602bbca0538cb380c8eb1fe5c545"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.206274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" event={"ID":"6b1810ad-df0b-44b5-8ba8-953039b85411","Type":"ContainerStarted","Data":"d6acfb12136f98c0ee2b09e8104a7152c04c7d19670fca6dfe9d3607317a820f"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.206400 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.207770 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" event={"ID":"5e7a9701-ed45-4289-8272-f850efbf1e75","Type":"ContainerStarted","Data":"fd7fa6c2a404fb6154becdfe49988ce67634b0f4531f1d2c95c37c52ba4e14a7"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.208264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.227841 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" podStartSLOduration=33.956867714 podStartE2EDuration="36.227813256s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:03:21.534075979 +0000 UTC m=+1026.911343891" lastFinishedPulling="2026-02-02 07:03:23.805021511 +0000 UTC m=+1029.182289433" observedRunningTime="2026-02-02 07:03:24.223424997 +0000 UTC m=+1029.600692919" watchObservedRunningTime="2026-02-02 07:03:24.227813256 +0000 UTC m=+1029.605081198" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.255596 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" 
podStartSLOduration=35.255576189 podStartE2EDuration="35.255576189s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:03:24.252402161 +0000 UTC m=+1029.629670083" watchObservedRunningTime="2026-02-02 07:03:24.255576189 +0000 UTC m=+1029.632844111" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.293911 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" podStartSLOduration=32.986494234 podStartE2EDuration="35.293886853s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:03:21.496267728 +0000 UTC m=+1026.873535640" lastFinishedPulling="2026-02-02 07:03:23.803660337 +0000 UTC m=+1029.180928259" observedRunningTime="2026-02-02 07:03:24.287758122 +0000 UTC m=+1029.665026034" watchObservedRunningTime="2026-02-02 07:03:24.293886853 +0000 UTC m=+1029.671154765" Feb 02 07:03:31 crc kubenswrapper[4842]: I0202 07:03:31.044799 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:31 crc kubenswrapper[4842]: I0202 07:03:31.236478 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:32 crc kubenswrapper[4842]: I0202 07:03:32.233264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:42 crc kubenswrapper[4842]: I0202 07:03:42.146741 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:03:42 crc kubenswrapper[4842]: I0202 07:03:42.147405 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.882317 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.884110 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.885858 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-h5n7q" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.888535 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.888878 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.889150 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.899721 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.961202 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.962197 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.964998 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.101761 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.101866 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.101936 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.131319 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.203616 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.203867 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 
07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.203961 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.204114 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.204264 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.205199 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.205235 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.222209 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.305300 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.305645 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.306724 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.323358 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.428518 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.456833 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.704708 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.740563 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.955617 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:51 crc kubenswrapper[4842]: W0202 07:03:51.958366 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode957a502_d44b_4b06_97c1_e0d7c9d75865.slice/crio-a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d WatchSource:0}: Error finding container a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d: Status 404 returned error can't find the container with id a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.438629 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" event={"ID":"bc463aa5-6e00-466a-8cba-7d1370a7c79b","Type":"ContainerStarted","Data":"43b019fa43de3914a140a52df26f02dc7038a30388bbfbca8f30181349c5a701"} Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.440547 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" event={"ID":"e957a502-d44b-4b06-97c1-e0d7c9d75865","Type":"ContainerStarted","Data":"a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d"} Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.706553 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.740983 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.742376 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.753208 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.923510 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.923548 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.923569 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.024567 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.024618 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.024639 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.025785 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.026273 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.048327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhzgm\" (UniqueName: 
\"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.077251 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.378256 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.394209 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.396179 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.402003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.532320 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.532433 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.532490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.552409 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.635065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.635208 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.635289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " 
pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.636321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.636335 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.676895 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.728167 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.841904 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.842987 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.847145 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.847366 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.847483 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.848813 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.851386 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.852262 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.854382 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.854459 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-p5ttv" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951332 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951738 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951764 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951789 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951844 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952072 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952098 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952123 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952138 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053959 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053994 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054057 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054083 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054431 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054730 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054828 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054853 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054980 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055018 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055060 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055082 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055153 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055719 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.057063 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.058406 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.059079 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " 
pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.059118 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.059271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.068702 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.074243 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.165543 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.169165 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:03:54 crc kubenswrapper[4842]: W0202 07:03:54.199916 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11728eb4_1f90_43b9_a299_1c906e4445a2.slice/crio-9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606 WatchSource:0}: Error finding container 9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606: Status 404 returned error can't find the container with id 9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606 Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.459775 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerStarted","Data":"9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606"} Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.461253 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerStarted","Data":"8546f85ea074aefba993cdb0bf6ad37f1ca8e108781983b99c2bd584652a33a1"} Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.514233 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.515344 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.517602 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.517707 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lt4fp" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.519628 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.520532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.520771 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.520976 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.521122 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.528624 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.569706 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:03:54 crc kubenswrapper[4842]: W0202 07:03:54.575898 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2ca532_dbbc_4148_8d2f_fc474685f0bd.slice/crio-63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83 WatchSource:0}: Error finding container 63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83: Status 404 returned error can't find the container with id 63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83 Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672638 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672936 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672952 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672969 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672989 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673061 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673076 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774645 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774685 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774757 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774780 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774814 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774828 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774843 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774860 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774881 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774898 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.775802 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.775828 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.776538 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.776847 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.778503 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.779072 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.786862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.789963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.790955 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.791070 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.809404 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.816964 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.858397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.414647 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:03:55 crc kubenswrapper[4842]: W0202 07:03:55.425669 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod441d47f7_e5dd_456f_b6fa_10a642be6742.slice/crio-f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c WatchSource:0}: Error finding container f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c: Status 404 returned error can't find the container with id f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.472188 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerStarted","Data":"f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c"} Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.473452 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerStarted","Data":"63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83"} Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.870306 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.873403 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.877874 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xfhgf" Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.877953 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.878046 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.881877 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.883889 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.884601 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.000977 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001056 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001094 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001137 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001157 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001237 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001279 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102128 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102196 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102244 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102272 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102340 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102419 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102445 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"openstack-galera-0\" 
(UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.103042 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.103451 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.103789 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.104205 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.107015 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.117722 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.120327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.124890 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.139551 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.208301 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.729335 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:03:56 crc kubenswrapper[4842]: W0202 07:03:56.737501 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod709c39fb_802f_4690_89f6_41a717e7244c.slice/crio-b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e WatchSource:0}: Error finding container b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e: Status 404 returned error can't find the container with id b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.366251 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.368373 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.371788 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-glnh2" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.371936 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.372096 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.372273 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.392339 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.507351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerStarted","Data":"b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e"} Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537298 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537377 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537420 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537445 4842 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537470 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537496 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537514 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537540 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638714 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638777 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638812 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638853 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638870 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638894 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638948 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638984 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640009 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640201 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640533 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640880 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.641025 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.646457 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.649151 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.667595 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.668458 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.672757 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.672894 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.672956 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-krkzp" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.690438 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.692353 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.711264 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.745960 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746003 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746080 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746110 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847536 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847587 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847617 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847701 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847733 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.848457 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.848457 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.851566 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.855974 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.873565 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0" Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.998106 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 07:03:58 crc kubenswrapper[4842]: I0202 07:03:58.061547 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.543371 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.544731 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.546642 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g5fgs" Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.558808 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.675122 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"kube-state-metrics-0\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " pod="openstack/kube-state-metrics-0" Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.776056 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"kube-state-metrics-0\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " pod="openstack/kube-state-metrics-0" Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.794604 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"kube-state-metrics-0\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " pod="openstack/kube-state-metrics-0" Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.872154 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.446263 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.447800 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.453076 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.455042 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.460523 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.460642 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.460693 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-wv7db" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.468012 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.492572 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532187 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532240 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532261 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532289 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532486 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " 
pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532670 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532689 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532713 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532792 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532900 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532918 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634395 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " 
pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634524 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634548 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634625 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634648 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634671 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634708 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634738 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634784 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634805 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634826 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634853 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.635446 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638312 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638315 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638549 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638694 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639132 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639340 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639483 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.651905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.655024 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.655810 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.681155 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.720397 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.721493 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.724572 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.726199 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.733467 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.733802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qt89x" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.738139 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.747602 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.775742 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.795089 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837399 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837465 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837492 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837514 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837533 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837581 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837610 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837641 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938745 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: 
I0202 07:04:03.938824 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938896 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938949 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938981 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.939021 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.939383 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.939473 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.940332 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " 
pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.941265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.951986 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.952006 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.952085 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.957051 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.958857 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:04 crc kubenswrapper[4842]: I0202 07:04:04.037661 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.039691 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.042578 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.046446 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.047675 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-r55ms" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.047861 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.050561 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.058565 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101123 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101178 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101203 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101237 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101261 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101488 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101591 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " 
pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.209629 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210047 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210124 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210205 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210290 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210333 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210559 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.211064 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.211706 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.211767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.216569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.223158 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.225201 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.230742 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.243935 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.365534 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:11 crc kubenswrapper[4842]: E0202 07:04:11.631986 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Feb 02 07:04:11 crc kubenswrapper[4842]: E0202 07:04:11.632551 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n8dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(441d47f7-e5dd-456f-b6fa-10a642be6742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:11 crc kubenswrapper[4842]: E0202 07:04:11.633713 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" Feb 02 07:04:12 crc kubenswrapper[4842]: I0202 07:04:12.145932 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:04:12 crc kubenswrapper[4842]: I0202 07:04:12.146265 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.537753 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.537973 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5mdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-5f854695bc-nkfxn_openstack(e957a502-d44b-4b06-97c1-e0d7c9d75865): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.539160 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" podUID="e957a502-d44b-4b06-97c1-e0d7c9d75865" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.560116 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.560409 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ttm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2b2ca532-dbbc-4148-8d2f-fc474685f0bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.561786 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.574947 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.575125 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhzgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-v87kh_openstack(b03422f3-6220-40a9-b410-390213ff282e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.576420 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podUID="b03422f3-6220-40a9-b410-390213ff282e" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583130 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583283 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-nnwvg_openstack(bc463aa5-6e00-466a-8cba-7d1370a7c79b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583346 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583523 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4hpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-k5tj8_openstack(11728eb4-1f90-43b9-a299-1c906e4445a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.584765 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" podUID="bc463aa5-6e00-466a-8cba-7d1370a7c79b"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.585465 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639045 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podUID="b03422f3-6220-40a9-b410-390213ff282e"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639323 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639368 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd"
Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639411 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2"
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.151087 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.178255 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn"
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266296 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"e957a502-d44b-4b06-97c1-e0d7c9d75865\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") "
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266421 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"e957a502-d44b-4b06-97c1-e0d7c9d75865\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") "
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266472 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") "
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266519 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"e957a502-d44b-4b06-97c1-e0d7c9d75865\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") "
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266545 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") "
Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.267292 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config" (OuterVolumeSpecName: "config") pod "e957a502-d44b-4b06-97c1-e0d7c9d75865" (UID: "e957a502-d44b-4b06-97c1-e0d7c9d75865"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
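The ErrImagePull and ImagePullBackOff entries above are the kubelet's pod workers giving up on a cancelled image pull and then backing off before retrying. A minimal client-go sketch for spotting pods stuck in this state from the API side; the kubeconfig path and the hard-coded "openstack" namespace are illustrative assumptions, not values read from this node:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("openstack").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		// Init containers (like the dnsmasq "init" above) report pull
		// failures the same way as regular containers.
		statuses := append(pod.Status.InitContainerStatuses, pod.Status.ContainerStatuses...)
		for _, st := range statuses {
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s container=%s reason=%s: %s\n",
					pod.Namespace, pod.Name, st.Name, w.Reason, w.Message)
			}
		}
	}
}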
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.267606 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config" (OuterVolumeSpecName: "config") pod "bc463aa5-6e00-466a-8cba-7d1370a7c79b" (UID: "bc463aa5-6e00-466a-8cba-7d1370a7c79b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.267887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e957a502-d44b-4b06-97c1-e0d7c9d75865" (UID: "e957a502-d44b-4b06-97c1-e0d7c9d75865"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.272049 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2" (OuterVolumeSpecName: "kube-api-access-6f9p2") pod "bc463aa5-6e00-466a-8cba-7d1370a7c79b" (UID: "bc463aa5-6e00-466a-8cba-7d1370a7c79b"). InnerVolumeSpecName "kube-api-access-6f9p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.273407 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp" (OuterVolumeSpecName: "kube-api-access-z5mdp") pod "e957a502-d44b-4b06-97c1-e0d7c9d75865" (UID: "e957a502-d44b-4b06-97c1-e0d7c9d75865"). InnerVolumeSpecName "kube-api-access-z5mdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368170 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368232 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368246 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368261 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368271 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.449304 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.512956 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:04:16 crc kubenswrapper[4842]: W0202 07:04:16.516248 4842 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode467a49f_fdc1_4a9e_9907_4425f5ec6177.slice/crio-e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad WatchSource:0}: Error finding container e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad: Status 404 returned error can't find the container with id e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad Feb 02 07:04:16 crc kubenswrapper[4842]: W0202 07:04:16.517533 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed4dadb_b854_4082_b18a_67f58543bb9a.slice/crio-fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd WatchSource:0}: Error finding container fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd: Status 404 returned error can't find the container with id fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.518902 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.642136 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.650922 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.701814 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerStarted","Data":"ccad06562fb6f40d062777e6d3a6e4d9830ae7a447085c52c329d40fd37ced11"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.703570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerStarted","Data":"29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.703628 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerStarted","Data":"fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.705840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerStarted","Data":"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.706886 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerStarted","Data":"db5e53906e871ace039a809b4c17e0f0a9393b7521bbea23546882f45795c673"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.708000 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerStarted","Data":"1455920f56b035102336b6030ca95115000c538e6e505a3b940faf00be0a7147"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.708803 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" 
event={"ID":"e957a502-d44b-4b06-97c1-e0d7c9d75865","Type":"ContainerDied","Data":"a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.708865 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.711176 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" event={"ID":"bc463aa5-6e00-466a-8cba-7d1370a7c79b","Type":"ContainerDied","Data":"43b019fa43de3914a140a52df26f02dc7038a30388bbfbca8f30181349c5a701"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.711291 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.714329 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerStarted","Data":"e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.746002 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.813806 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.826252 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.852952 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.861392 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.874624 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.444864 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc463aa5-6e00-466a-8cba-7d1370a7c79b" path="/var/lib/kubelet/pods/bc463aa5-6e00-466a-8cba-7d1370a7c79b/volumes" Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.445316 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e957a502-d44b-4b06-97c1-e0d7c9d75865" path="/var/lib/kubelet/pods/e957a502-d44b-4b06-97c1-e0d7c9d75865/volumes" Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.738558 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerStarted","Data":"20790a3e9ff5cd63d4fa516d28e246cafad534d4d8104c6a1f16eb5a3c586904"} Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.740575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerStarted","Data":"0b86eb955efed6c0beae4754f7a259bd87ec4d6377bfa3532f73d18514ea5e3d"} Feb 02 07:04:19 crc kubenswrapper[4842]: I0202 07:04:19.757592 4842 generic.go:334] "Generic (PLEG): container finished" podID="709c39fb-802f-4690-89f6-41a717e7244c" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" exitCode=0 Feb 02 07:04:19 crc kubenswrapper[4842]: I0202 07:04:19.757647 4842 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerDied","Data":"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d"} Feb 02 07:04:20 crc kubenswrapper[4842]: I0202 07:04:20.767760 4842 generic.go:334] "Generic (PLEG): container finished" podID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerID="29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950" exitCode=0 Feb 02 07:04:20 crc kubenswrapper[4842]: I0202 07:04:20.767822 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerDied","Data":"29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.787485 4842 generic.go:334] "Generic (PLEG): container finished" podID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerID="0e2b21c37cc6f772bef7c4e80d3e6f156ca0d9772f52dfdc03a69fbc57f8dd8b" exitCode=0 Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.787573 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"0e2b21c37cc6f772bef7c4e80d3e6f156ca0d9772f52dfdc03a69fbc57f8dd8b"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.792195 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerStarted","Data":"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.795060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerStarted","Data":"c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.796559 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerStarted","Data":"7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.796803 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.801923 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerStarted","Data":"6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.803460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerStarted","Data":"42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.803565 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.818919 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerStarted","Data":"95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4"} Feb 02 07:04:22 crc 
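The generic.go:334 "container finished ... exitCode=0" entries followed by ContainerStarted events for the same pods (openstack-galera-0, openstack-cell1-galera-0, ovn-controller-ovs-vctt8) reflect the init-container contract: the kubelet runs each init container to completion, and only a zero exit code lets the pod advance to its regular containers. A minimal sketch of a pod that produces this Died-then-Started sequence; every name and image here is a placeholder, not an object from this cluster:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo", Namespace: "openstack"},
		Spec: corev1.PodSpec{
			// Runs first; the PLEG reports ContainerDied with exitCode=0
			// for this container before "app" ever starts.
			InitContainers: []corev1.Container{{
				Name:    "init",
				Image:   "registry.example/busybox:latest", // placeholder
				Command: []string{"sh", "-c", "echo bootstrap && exit 0"},
			}},
			// Started only after every init container has exited 0.
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example/app:latest", // placeholder
			}},
		},
	}
	fmt.Println(pod.Name)
}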
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.819115 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.821727 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerStarted","Data":"6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d"}
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.831549 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sgwrm" podStartSLOduration=15.174366101 podStartE2EDuration="19.831528672s" podCreationTimestamp="2026-02-02 07:04:03 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.518048531 +0000 UTC m=+1081.895316443" lastFinishedPulling="2026-02-02 07:04:21.175211092 +0000 UTC m=+1086.552479014" observedRunningTime="2026-02-02 07:04:22.829966374 +0000 UTC m=+1088.207234306" watchObservedRunningTime="2026-02-02 07:04:22.831528672 +0000 UTC m=+1088.208796584"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.891225 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.593767379 podStartE2EDuration="23.891193142s" podCreationTimestamp="2026-02-02 07:03:59 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.663349101 +0000 UTC m=+1082.040617013" lastFinishedPulling="2026-02-02 07:04:21.960774864 +0000 UTC m=+1087.338042776" observedRunningTime="2026-02-02 07:04:22.885266836 +0000 UTC m=+1088.262534748" watchObservedRunningTime="2026-02-02 07:04:22.891193142 +0000 UTC m=+1088.268461054"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.917382 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.702933604 podStartE2EDuration="28.917358407s" podCreationTimestamp="2026-02-02 07:03:54 +0000 UTC" firstStartedPulling="2026-02-02 07:03:56.739980764 +0000 UTC m=+1062.117248676" lastFinishedPulling="2026-02-02 07:04:15.954405547 +0000 UTC m=+1081.331673479" observedRunningTime="2026-02-02 07:04:22.912075076 +0000 UTC m=+1088.289342988" watchObservedRunningTime="2026-02-02 07:04:22.917358407 +0000 UTC m=+1088.294626319"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.945958 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.945942511 podStartE2EDuration="26.945942511s" podCreationTimestamp="2026-02-02 07:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:22.93617953 +0000 UTC m=+1088.313447462" watchObservedRunningTime="2026-02-02 07:04:22.945942511 +0000 UTC m=+1088.323210423"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.981993 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.181974688 podStartE2EDuration="25.981978518s" podCreationTimestamp="2026-02-02 07:03:57 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.453990293 +0000 UTC m=+1081.831258205" lastFinishedPulling="2026-02-02 07:04:21.253994103 +0000 UTC m=+1086.631262035" observedRunningTime="2026-02-02 07:04:22.981922687 +0000 UTC m=+1088.359190599" watchObservedRunningTime="2026-02-02 07:04:22.981978518 +0000 UTC m=+1088.359246430"
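The pod_startup_latency_tracker entries log two durations per pod. From the numbers above they appear to be related by podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling), i.e. the SLO figure excludes time spent pulling images. A small sketch that reproduces this arithmetic from the ovn-controller-sgwrm entry; the timestamps are copied from the log, and the tiny differences from the logged durations come from the kubelet computing against the monotonic m=+ offsets rather than wall-clock values:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values from the ovn-controller-sgwrm entry above.
	created := parse("2026-02-02 07:04:03 +0000 UTC")
	firstPull := parse("2026-02-02 07:04:16.518048531 +0000 UTC")
	lastPull := parse("2026-02-02 07:04:21.175211092 +0000 UTC")
	running := parse("2026-02-02 07:04:22.829966374 +0000 UTC")

	e2e := running.Sub(created)          // ~19.83s (logged: 19.831528672s)
	slo := e2e - lastPull.Sub(firstPull) // ~15.17s (logged: 15.174366101)
	fmt.Println(e2e, slo)
}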
Feb 02 07:04:23 crc kubenswrapper[4842]: I0202 07:04:23.831628 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerStarted","Data":"a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.850274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerStarted","Data":"3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.850707 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.850743 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.853241 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerStarted","Data":"12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.855020 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerStarted","Data":"c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.876865 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-vctt8" podStartSLOduration=17.361547779 podStartE2EDuration="21.876834205s" podCreationTimestamp="2026-02-02 07:04:03 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.871790925 +0000 UTC m=+1082.249058837" lastFinishedPulling="2026-02-02 07:04:21.387077321 +0000 UTC m=+1086.764345263" observedRunningTime="2026-02-02 07:04:24.87213347 +0000 UTC m=+1090.249401442" watchObservedRunningTime="2026-02-02 07:04:24.876834205 +0000 UTC m=+1090.254102157"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.905466 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.862718095 podStartE2EDuration="18.90544596s" podCreationTimestamp="2026-02-02 07:04:06 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.664743735 +0000 UTC m=+1082.042011647" lastFinishedPulling="2026-02-02 07:04:23.70747156 +0000 UTC m=+1089.084739512" observedRunningTime="2026-02-02 07:04:24.900788946 +0000 UTC m=+1090.278056868" watchObservedRunningTime="2026-02-02 07:04:24.90544596 +0000 UTC m=+1090.282713892"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.933266 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.973020733 podStartE2EDuration="22.933198594s" podCreationTimestamp="2026-02-02 07:04:02 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.755346727 +0000 UTC m=+1082.132614629" lastFinishedPulling="2026-02-02 07:04:23.715524578 +0000 UTC m=+1089.092792490" observedRunningTime="2026-02-02 07:04:24.924835978 +0000 UTC m=+1090.302103900" watchObservedRunningTime="2026-02-02 07:04:24.933198594 +0000 UTC m=+1090.310466536"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.038930 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.091093 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: E0202 07:04:25.126838 4842 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.169:55736->38.102.83.169:45991: read tcp 38.102.83.169:55736->38.102.83.169:45991: read: connection reset by peer
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.366312 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.420790 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.862377 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.862665 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.209099 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.209153 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.872299 4842 generic.go:334] "Generic (PLEG): container finished" podID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48" exitCode=0
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.872423 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerDied","Data":"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"}
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.874310 4842 generic.go:334] "Generic (PLEG): container finished" podID="b03422f3-6220-40a9-b410-390213ff282e" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53" exitCode=0
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.874687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerDied","Data":"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53"}
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.420277 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.639506 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.674763 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.675988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
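The probe="startup" transitions from "unhealthy" to "started" for the two ovsdbserver pods, followed by readiness probe events, show startup probes gating the other probes: readiness and liveness checks do not begin until the startup probe has succeeded once. A hedged sketch of a container wired this way; the command, port, and thresholds are illustrative guesses, not the actual ovsdbserver probe configuration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "ovsdb-server",
		Image: "registry.example/ovsdb-server:latest", // placeholder
		// Until this succeeds once, the kubelet reports the startup probe
		// as "unhealthy"; the first success is the "started" transition.
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{
					Command: []string{"/bin/sh", "-c", "ovsdb-client list-dbs"}, // assumed check
				},
			},
			PeriodSeconds:    3,
			FailureThreshold: 10, // tolerates ~30s of slow startup before restart
		},
		// Only consulted after the startup probe has passed, matching the
		// readiness events that follow the "started" entries above.
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(6642)},
			},
			PeriodSeconds: 5,
		},
	}
	fmt.Println(c.Name)
}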
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.678844 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.709140 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.710056 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.711612 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.719550 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.753414 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772605 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772724 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772771 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772825 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772888 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772909 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772937 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772973 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772998 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874793 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874847 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874902 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874942 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874972 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874996 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875038 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875803 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875882 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875947 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.877977 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.880309 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.883130 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.890300 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.895880 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.900838 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.906698 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerStarted","Data":"15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef"} Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.910130 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.911609 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.923026 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns" containerID="cri-o://42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686" gracePeriod=10 Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.922926 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerStarted","Data":"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"} Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.924361 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 
07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.935900 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns" containerID="cri-o://03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a" gracePeriod=10 Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.936128 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerStarted","Data":"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"} Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.936173 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.936184 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.937463 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.939607 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.962437 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.964728 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podStartSLOduration=3.690413982 podStartE2EDuration="35.96471067s" podCreationTimestamp="2026-02-02 07:03:52 +0000 UTC" firstStartedPulling="2026-02-02 07:03:53.602826977 +0000 UTC m=+1058.980094889" lastFinishedPulling="2026-02-02 07:04:25.877123625 +0000 UTC m=+1091.254391577" observedRunningTime="2026-02-02 07:04:27.955194085 +0000 UTC m=+1093.332461997" watchObservedRunningTime="2026-02-02 07:04:27.96471067 +0000 UTC m=+1093.341978582" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.977579 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podStartSLOduration=-9223372001.877216 podStartE2EDuration="34.977559206s" podCreationTimestamp="2026-02-02 07:03:53 +0000 UTC" firstStartedPulling="2026-02-02 07:03:54.202549229 +0000 UTC m=+1059.579817141" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:27.974442319 +0000 UTC m=+1093.351710231" watchObservedRunningTime="2026-02-02 07:04:27.977559206 +0000 UTC m=+1093.354827118" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.992619 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.999032 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.999071 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.030792 4842 util.go:30] "No sandbox for pod can be found. 
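The kuberuntime_container.go:808 entries show the kubelet stopping the superseded dnsmasq pods with gracePeriod=10: the API-side "SyncLoop DELETE" is translated into a SIGTERM to the container, with a SIGKILL only if the grace period lapses. A sketch of requesting the same graceful deletion through client-go; the kubeconfig path mirrors earlier assumptions and the pod name is copied from the log purely for illustration:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Matches the 10s grace period the kubelet logged for the dnsmasq pods.
	grace := int64(10)
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"dnsmasq-dns-95f5f6995-k5tj8",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		log.Fatal(err)
	}
	// The pod object stays visible (with a deletionTimestamp) until the
	// kubelet finishes the graceful stop and the volume teardown above.
}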
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.063451 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078499 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078581 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078636 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078794 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.105401 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.181866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.181987 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.182039 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.182065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.182183 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.183862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.184634 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.184650 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.184820 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.208745 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.258946 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.393961 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.450928 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.486676 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"11728eb4-1f90-43b9-a299-1c906e4445a2\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.487004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"11728eb4-1f90-43b9-a299-1c906e4445a2\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.487128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"11728eb4-1f90-43b9-a299-1c906e4445a2\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.491557 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx" (OuterVolumeSpecName: "kube-api-access-s4hpx") pod "11728eb4-1f90-43b9-a299-1c906e4445a2" (UID: "11728eb4-1f90-43b9-a299-1c906e4445a2"). InnerVolumeSpecName "kube-api-access-s4hpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.530120 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config" (OuterVolumeSpecName: "config") pod "11728eb4-1f90-43b9-a299-1c906e4445a2" (UID: "11728eb4-1f90-43b9-a299-1c906e4445a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.530870 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11728eb4-1f90-43b9-a299-1c906e4445a2" (UID: "11728eb4-1f90-43b9-a299-1c906e4445a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.588638 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"b03422f3-6220-40a9-b410-390213ff282e\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.588780 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"b03422f3-6220-40a9-b410-390213ff282e\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.588837 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"b03422f3-6220-40a9-b410-390213ff282e\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.589443 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.589457 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.589467 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:28 crc kubenswrapper[4842]: W0202 07:04:28.592505 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda768c72b_df6d_463e_b085_996d7b910985.slice/crio-3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5 WatchSource:0}: Error finding container 3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5: Status 404 returned error can't find the container with id 3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5 Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.596304 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm" (OuterVolumeSpecName: "kube-api-access-zhzgm") pod "b03422f3-6220-40a9-b410-390213ff282e" (UID: "b03422f3-6220-40a9-b410-390213ff282e"). InnerVolumeSpecName "kube-api-access-zhzgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.596388 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4glck"] Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.610676 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"] Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.627585 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config" (OuterVolumeSpecName: "config") pod "b03422f3-6220-40a9-b410-390213ff282e" (UID: "b03422f3-6220-40a9-b410-390213ff282e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.633196 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b03422f3-6220-40a9-b410-390213ff282e" (UID: "b03422f3-6220-40a9-b410-390213ff282e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.691616 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.691650 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.691662 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.806995 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:28 crc kubenswrapper[4842]: W0202 07:04:28.850759 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a75411c_41b6_4e66_9c29_5dd8e5de211a.slice/crio-6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594 WatchSource:0}: Error finding container 6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594: Status 404 returned error can't find the container with id 6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594 Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.942555 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerStarted","Data":"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.943577 4842 generic.go:334] "Generic (PLEG): container finished" podID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" exitCode=0 Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.943622 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerDied","Data":"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.943636 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerStarted","Data":"5ea515418db439b7b85e9f81e72d96b594a2f4593445c0e76fd6508fbe9dc808"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.949022 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" event={"ID":"5a75411c-41b6-4e66-9c29-5dd8e5de211a","Type":"ContainerStarted","Data":"6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 
07:04:28.975521 4842 generic.go:334] "Generic (PLEG): container finished" podID="b03422f3-6220-40a9-b410-390213ff282e" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a" exitCode=0 Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975586 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerDied","Data":"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975612 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerDied","Data":"8546f85ea074aefba993cdb0bf6ad37f1ca8e108781983b99c2bd584652a33a1"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975634 4842 scope.go:117] "RemoveContainer" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975759 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.996246 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerStarted","Data":"a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.996284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerStarted","Data":"3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.998620 4842 generic.go:334] "Generic (PLEG): container finished" podID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686" exitCode=0 Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.999031 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.999099 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerDied","Data":"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"} Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.999131 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerDied","Data":"9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606"} Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.037059 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.051872 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4glck" podStartSLOduration=2.0518566 podStartE2EDuration="2.0518566s" podCreationTimestamp="2026-02-02 07:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:29.049285817 +0000 UTC m=+1094.426553729" watchObservedRunningTime="2026-02-02 07:04:29.0518566 +0000 UTC m=+1094.429124512" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.093435 4842 scope.go:117] "RemoveContainer" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.106834 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.128511 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.134503 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.135635 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.148800 4842 scope.go:117] "RemoveContainer" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a" Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.163437 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a\": container with ID starting with 03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a not found: ID does not exist" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.163487 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"} err="failed to get container status \"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a\": rpc error: code = NotFound desc = could not find container \"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a\": container with ID starting with 03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a not found: ID does not exist" Feb 02 07:04:29 crc 
kubenswrapper[4842]: I0202 07:04:29.166254 4842 scope.go:117] "RemoveContainer" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53" Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.166724 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53\": container with ID starting with 1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53 not found: ID does not exist" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.166756 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53"} err="failed to get container status \"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53\": rpc error: code = NotFound desc = could not find container \"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53\": container with ID starting with 1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53 not found: ID does not exist" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.166778 4842 scope.go:117] "RemoveContainer" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.184448 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.216450 4842 scope.go:117] "RemoveContainer" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.218901 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.248929 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.248953 4842 scope.go:117] "RemoveContainer" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686" Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.250041 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686\": container with ID starting with 42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686 not found: ID does not exist" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.250075 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"} err="failed to get container status \"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686\": rpc error: code = NotFound desc = could not find container \"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686\": container with ID starting with 42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686 not found: ID does not exist" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.250098 4842 scope.go:117] "RemoveContainer" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48" Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.252422 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48\": container with ID starting with 6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48 not found: ID does not exist" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"
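
Note: the RemoveContainer / "ContainerStatus from runtime service failed" exchanges above (containers 03672f6c..., 1cda8b1b..., 42633a7d..., 6a444f5c...) are the kubelet re-issuing removals for containers CRI-O has evidently already deleted along with the removed dnsmasq pods; the NotFound is logged by pod_container_deletor and the sync continues rather than failing. A minimal Go sketch of that tolerance, assuming a gRPC-backed runtime call (removeContainer and removeIfPresent are illustrative names, not kubelet functions):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for the CRI RemoveContainer call; here it always
// reports NotFound, mimicking the "could not find container" errors above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent treats NotFound as success: either way the container is
// gone, so the removal is effectively idempotent.
func removeIfPresent(id string) error {
	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	// Prints <nil>: the NotFound from the runtime is swallowed.
	fmt.Println(removeIfPresent("6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"))
}
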
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.252459 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"} err="failed to get container status \"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48\": rpc error: code = NotFound desc = could not find container \"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48\": container with ID starting with 6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48 not found: ID does not exist" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.375722 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376280 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="init" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376374 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="init" Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376466 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="init" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376546 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="init" Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376641 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376720 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns" Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376779 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376830 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.382452 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.382713 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.383995 4842 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386285 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386524 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386650 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386913 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kzrkr" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.406786 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.443927 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" path="/var/lib/kubelet/pods/11728eb4-1f90-43b9-a299-1c906e4445a2/volumes" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.444634 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03422f3-6220-40a9-b410-390213ff282e" path="/var/lib/kubelet/pods/b03422f3-6220-40a9-b410-390213ff282e/volumes" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523378 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523439 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523492 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523525 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523543 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" 
(UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523592 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.624927 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.624978 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625043 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625177 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625203 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625242 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625791 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.626374 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod 
\"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.626394 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.630746 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.632236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.634048 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.645064 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.704122 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.883837 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.903756 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.952568 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.954076 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.972110 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.015965 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerStarted","Data":"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c"} Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.016845 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.017916 4842 generic.go:334] "Generic (PLEG): container finished" podID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerID="fde966c086e7db7ae0ce126efe437dd36616af251981330e64ff1cbb68eccd77" exitCode=0 Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.017957 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" event={"ID":"5a75411c-41b6-4e66-9c29-5dd8e5de211a","Type":"ContainerDied","Data":"fde966c086e7db7ae0ce126efe437dd36616af251981330e64ff1cbb68eccd77"} Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.038991 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039264 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039294 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039320 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039348 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.042662 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" podStartSLOduration=3.042651366 podStartE2EDuration="3.042651366s" podCreationTimestamp="2026-02-02 07:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:30.039877657 +0000 UTC m=+1095.417145569" watchObservedRunningTime="2026-02-02 07:04:30.042651366 +0000 UTC m=+1095.419919278"
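
Note: the "Observed pod startup duration" record above is internally consistent: podStartSLOduration=3.042651366 equals watchObservedRunningTime (07:04:30.042651366) minus podCreationTimestamp (07:04:27), and since firstStartedPulling/lastFinishedPulling are the zero time (no image pull happened), the SLO duration matches podStartE2EDuration. A quick Go check with the two timestamps copied from the record:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching time.Time's default String() form used in the record.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-02-02 07:04:27 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-02-02 07:04:30.042651366 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 3.042651366, i.e. the logged podStartSLOduration.
	fmt.Println(observed.Sub(created).Seconds())
}
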
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.141856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.142479 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.142521 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.142597 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143103 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143150 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143643 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143652 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.161961 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: E0202 07:04:30.255301 4842 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 02 07:04:30 crc kubenswrapper[4842]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 02 07:04:30 crc kubenswrapper[4842]: > podSandboxID="6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594" Feb 02 07:04:30 crc kubenswrapper[4842]: E0202 07:04:30.255464 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:04:30 crc kubenswrapper[4842]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2k8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-757dc6fff9-tttsf_openstack(5a75411c-41b6-4e66-9c29-5dd8e5de211a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 02 07:04:30 crc kubenswrapper[4842]: > logger="UnhandledError" Feb 02 07:04:30 crc kubenswrapper[4842]: E0202 07:04:30.256667 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.273875 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.359790 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.498491 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:04:30 crc kubenswrapper[4842]: W0202 07:04:30.506037 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57fef97_6ad3_4b54_9859_2b33853f7f6d.slice/crio-7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473 WatchSource:0}: Error finding container 7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473: Status 404 returned error can't find the container with id 7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473 Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.024736 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.030925 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
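
Note: the CreateContainerError above looks like a teardown race rather than a configuration problem: the API DELETE for dnsmasq-dns-757dc6fff9-tttsf arrived at 07:04:29.903756, so by the time CRI-O tried to start dnsmasq-dns in the sandbox created at 07:04:28, the kubelet had begun tearing the pod down and the prepared subPath source /var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volume-subpaths/dns-svc/dnsmasq-dns/1 no longer existed. The trailing /1 matches dns-svc being the second entry (index 1) in the container's VolumeMounts. A sketch of those mounts, values copied from the spec dump above, assuming k8s.io/api is available on the module path:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The four subPath mounts from the logged container spec; the kubelet
	// materializes each one under
	// /var/lib/kubelet/pods/<pod-uid>/volume-subpaths/<volume>/<container>/<index>,
	// which is the shape of the path CRI-O reported as missing.
	mounts := []corev1.VolumeMount{
		{Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
		{Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
		{Name: "ovsdbserver-nb", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/ovsdbserver-nb", SubPath: "ovsdbserver-nb"},
		{Name: "ovsdbserver-sb", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/ovsdbserver-sb", SubPath: "ovsdbserver-sb"},
	}
	for i, m := range mounts {
		fmt.Printf("index %d: volume %s -> %s (subPath %s)\n", i, m.Name, m.MountPath, m.SubPath)
	}
}
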
Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040623 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040683 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040780 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qhjpw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040931 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.047365 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerStarted","Data":"7d98f1543b01a1b62fffe3edf648bd287b5220b26fe6cebfee732f435b17cba6"} Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.049191 4842 generic.go:334] "Generic (PLEG): container finished" podID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerID="95945828629b93199fdf9c3ec54c43205bcf2d7c6c586860cf34627eab21e480" exitCode=0 Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.049441 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerDied","Data":"95945828629b93199fdf9c3ec54c43205bcf2d7c6c586860cf34627eab21e480"} Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.049548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerStarted","Data":"7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473"} Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.138430 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276003 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276086 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276106 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276147 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]:
I0202 07:04:31.276291 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377845 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377917 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377938 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377977 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378009 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378028 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.378149 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.378162 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.378226 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:31.878198516 +0000 UTC m=+1097.255466418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found
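
Note: the etc-swift projected volume for swift-storage-0 cannot be built yet because the swift-ring-files configmap does not exist; it is produced by the swift-ring-rebalance job that only gets scheduled below (SyncLoop ADD at 07:04:31.524620). Meanwhile the kubelet parks the mount and retries with a doubling delay: durationBeforeRetry is 500ms on this failure and 1s on the next one at 07:04:31.888. A minimal sketch of that backoff shape, with the initial delay taken from the log and the factor and cap assumed:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond  // first durationBeforeRetry in the log
	const factor = 2                 // next logged delay is 1s, i.e. doubled
	const maxDelay = 2 * time.Minute // assumed cap, not taken from the log

	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
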
Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378308 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378506 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.379096 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.382500 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.399377 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.426911 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.483965 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.524620 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.525316 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerName="init" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.525335 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerName="init" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.525504 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerName="init" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.526086 4842 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.528571 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.529185 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.529984 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.547645 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580392 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580446 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580508 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580536 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.588279 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s" (OuterVolumeSpecName: "kube-api-access-k2k8s") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "kube-api-access-k2k8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.633498 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.634361 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.637887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config" (OuterVolumeSpecName: "config") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.657821 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682592 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682642 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682662 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682837 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682909 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683076 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683179 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683299 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683316 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683327 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683337 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683349 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786312 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786436 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786455 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786497 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786520 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786581 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.787455 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.787573 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.787586 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.790768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.790986 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.793614 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.802514 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4zkn\" (UniqueName: 
\"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.841941 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.887690 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.887909 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.887942 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.888005 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:32.887984954 +0000 UTC m=+1098.265252876 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.062760 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" event={"ID":"5a75411c-41b6-4e66-9c29-5dd8e5de211a","Type":"ContainerDied","Data":"6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594"} Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.062958 4842 scope.go:117] "RemoveContainer" containerID="fde966c086e7db7ae0ce126efe437dd36616af251981330e64ff1cbb68eccd77" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.063063 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.078294 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerStarted","Data":"f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3"} Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.078341 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.123922 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" podStartSLOduration=3.123900925 podStartE2EDuration="3.123900925s" podCreationTimestamp="2026-02-02 07:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:32.100476308 +0000 UTC m=+1097.477744220" watchObservedRunningTime="2026-02-02 07:04:32.123900925 +0000 UTC m=+1097.501168837" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.196331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.242714 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.411477 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:04:32 crc kubenswrapper[4842]: W0202 07:04:32.418560 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15fb5e79_8dd5_46ae_b8dd_6944cc810350.slice/crio-1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade WatchSource:0}: Error finding container 1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade: Status 404 returned error can't find the container with id 1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.900771 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:32 crc kubenswrapper[4842]: E0202 07:04:32.901339 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:32 crc kubenswrapper[4842]: E0202 07:04:32.901362 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:32 crc kubenswrapper[4842]: E0202 07:04:32.901410 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:34.901393398 +0000 UTC m=+1100.278661310 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.114409 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerStarted","Data":"e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0"} Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.114460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerStarted","Data":"6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c"} Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.114670 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.119005 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerStarted","Data":"1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade"} Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.146399 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.621641871 podStartE2EDuration="4.146376162s" podCreationTimestamp="2026-02-02 07:04:29 +0000 UTC" firstStartedPulling="2026-02-02 07:04:30.372312536 +0000 UTC m=+1095.749580448" lastFinishedPulling="2026-02-02 07:04:31.897046827 +0000 UTC m=+1097.274314739" observedRunningTime="2026-02-02 07:04:33.135860733 +0000 UTC m=+1098.513128645" watchObservedRunningTime="2026-02-02 07:04:33.146376162 +0000 UTC m=+1098.523644074" Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.442572 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" path="/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volumes" Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.948885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:34 crc kubenswrapper[4842]: E0202 07:04:34.949064 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:34 crc kubenswrapper[4842]: E0202 07:04:34.949335 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:34 crc kubenswrapper[4842]: E0202 07:04:34.949392 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:38.949374026 +0000 UTC m=+1104.326641938 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.962775 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.974710 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.978725 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.979505 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.052077 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.052321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.155110 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.155185 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.156258 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.177100 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.299935 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:36 crc kubenswrapper[4842]: I0202 07:04:36.300022 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.159250 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerStarted","Data":"be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8"} Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.160603 4842 generic.go:334] "Generic (PLEG): container finished" podID="19378e36-9154-451c-88fe-dab4522aa0dc" containerID="fd930d739c77e2c60500ea7cab9f16a6ba8a914130efb858b41ff112a5549c6c" exitCode=0 Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.160654 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm2z9" event={"ID":"19378e36-9154-451c-88fe-dab4522aa0dc","Type":"ContainerDied","Data":"fd930d739c77e2c60500ea7cab9f16a6ba8a914130efb858b41ff112a5549c6c"} Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.160801 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm2z9" event={"ID":"19378e36-9154-451c-88fe-dab4522aa0dc","Type":"ContainerStarted","Data":"289357a68298a49918f4a3d7e9df807fcf5158b46465e992e8a6e7dcb82706d2"} Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.177500 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kbdxw" podStartSLOduration=2.683563434 podStartE2EDuration="6.177482851s" podCreationTimestamp="2026-02-02 07:04:31 +0000 UTC" firstStartedPulling="2026-02-02 07:04:32.422384948 +0000 UTC m=+1097.799652860" lastFinishedPulling="2026-02-02 07:04:35.916304365 +0000 UTC m=+1101.293572277" observedRunningTime="2026-02-02 07:04:37.173277537 +0000 UTC m=+1102.550545459" watchObservedRunningTime="2026-02-02 07:04:37.177482851 +0000 UTC m=+1102.554750763" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.600760 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.601798 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.610964 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.703259 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.703578 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.713671 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.714578 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.717239 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.730470 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805602 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805722 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.806572 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.826268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.907662 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.907799 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.908525 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.915010 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.920571 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.921798 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.926868 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.942333 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.943714 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.946791 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.952716 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.974877 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.999187 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009428 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009518 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009594 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009833 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.103253 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111417 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111466 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111536 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111594 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.112361 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.113337 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.130427 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.132324 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.290878 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.291832 4842 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.305718 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.315895 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.315999 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.319510 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.340427 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.383708 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.396532 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.397784 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: W0202 07:04:38.399281 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4450e400_557b_4092_8f73_124910137dc4.slice/crio-d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450 WatchSource:0}: Error finding container d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450: Status 404 returned error can't find the container with id d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450 Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.402723 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.420734 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.420818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.421025 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.421070 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.423698 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.442575 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.472555 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.522833 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.523359 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.526734 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.548691 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.592360 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.605429 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.624665 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"19378e36-9154-451c-88fe-dab4522aa0dc\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.624761 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"19378e36-9154-451c-88fe-dab4522aa0dc\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.625321 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19378e36-9154-451c-88fe-dab4522aa0dc" (UID: "19378e36-9154-451c-88fe-dab4522aa0dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.625639 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.629613 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp" (OuterVolumeSpecName: "kube-api-access-f5wsp") pod "19378e36-9154-451c-88fe-dab4522aa0dc" (UID: "19378e36-9154-451c-88fe-dab4522aa0dc"). InnerVolumeSpecName "kube-api-access-f5wsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.727104 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.727402 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.740016 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.805053 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.917584 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.930857 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"] Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.032783 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:39 crc kubenswrapper[4842]: E0202 07:04:39.032956 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:39 crc kubenswrapper[4842]: E0202 07:04:39.032971 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:39 crc kubenswrapper[4842]: E0202 07:04:39.033018 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:47.033004758 +0000 UTC m=+1112.410272670 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.176471 4842 generic.go:334] "Generic (PLEG): container finished" podID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerID="af9aab2a24cfc4f124984122e483edf359b136da9788f63d0af01da2b636aa44" exitCode=0 Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.176541 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ec7-account-create-update-x5rkz" event={"ID":"6601a68f-34a5-4629-ac74-97cb14e809f3","Type":"ContainerDied","Data":"af9aab2a24cfc4f124984122e483edf359b136da9788f63d0af01da2b636aa44"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.176778 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ec7-account-create-update-x5rkz" event={"ID":"6601a68f-34a5-4629-ac74-97cb14e809f3","Type":"ContainerStarted","Data":"1bfda80e82935159993cfdb80d57362500543b4c3a630820faa6ff4dbddd1689"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.178526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm2z9" event={"ID":"19378e36-9154-451c-88fe-dab4522aa0dc","Type":"ContainerDied","Data":"289357a68298a49918f4a3d7e9df807fcf5158b46465e992e8a6e7dcb82706d2"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.178579 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289357a68298a49918f4a3d7e9df807fcf5158b46465e992e8a6e7dcb82706d2" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.178583 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.180062 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerStarted","Data":"8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.180102 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerStarted","Data":"5a286490efae1b2fcfd3289842091a1573875773e0e26817daf7cfeecd21545c"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.181181 4842 generic.go:334] "Generic (PLEG): container finished" podID="4450e400-557b-4092-8f73-124910137dc4" containerID="1f6dfdf20fb08a168081a064432d989dfc5b7013b8511778f8a6195c000accc0" exitCode=0 Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.181249 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ctcq" event={"ID":"4450e400-557b-4092-8f73-124910137dc4","Type":"ContainerDied","Data":"1f6dfdf20fb08a168081a064432d989dfc5b7013b8511778f8a6195c000accc0"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.181268 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ctcq" event={"ID":"4450e400-557b-4092-8f73-124910137dc4","Type":"ContainerStarted","Data":"d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.182672 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerStarted","Data":"d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.182708 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerStarted","Data":"15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.184655 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerStarted","Data":"d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.184686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerStarted","Data":"b54b449d9636044ec4aa3fc42dc49895933f5c104686edd5988476072faf577b"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.229853 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-p28sd" podStartSLOduration=2.229829287 podStartE2EDuration="2.229829287s" podCreationTimestamp="2026-02-02 07:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:39.208989234 +0000 UTC m=+1104.586257156" watchObservedRunningTime="2026-02-02 07:04:39.229829287 +0000 UTC m=+1104.607097199" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.239006 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-85ce-account-create-update-rxmcp" podStartSLOduration=2.238988703 podStartE2EDuration="2.238988703s" podCreationTimestamp="2026-02-02 07:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:39.231819116 +0000 UTC m=+1104.609087028" watchObservedRunningTime="2026-02-02 07:04:39.238988703 +0000 UTC m=+1104.616256605" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.266566 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vsjtz" podStartSLOduration=1.266543201 podStartE2EDuration="1.266543201s" podCreationTimestamp="2026-02-02 07:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:39.261266971 +0000 UTC m=+1104.638534883" watchObservedRunningTime="2026-02-02 07:04:39.266543201 +0000 UTC m=+1104.643811113" Feb 02 07:04:39 crc kubenswrapper[4842]: W0202 07:04:39.274308 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef83800c_79dc_4cfa_9f7c_194a44995d12.slice/crio-45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd WatchSource:0}: Error finding container 45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd: Status 404 returned error can't find the container with id 45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.279525 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.212416 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerID="8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.212517 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerDied","Data":"8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.216578 4842 generic.go:334] "Generic (PLEG): container finished" podID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerID="5a4746c338d6ea60edc25a0f516095639bc028a5f96d859500d9f30d568afd7f" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.216682 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-l9hwl" event={"ID":"ef83800c-79dc-4cfa-9f7c-194a44995d12","Type":"ContainerDied","Data":"5a4746c338d6ea60edc25a0f516095639bc028a5f96d859500d9f30d568afd7f"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.216717 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-l9hwl" event={"ID":"ef83800c-79dc-4cfa-9f7c-194a44995d12","Type":"ContainerStarted","Data":"45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.231891 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f4b2578-8a31-4097-afd3-04bae6621094" containerID="d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.232044 4842 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerDied","Data":"d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.247757 4842 generic.go:334] "Generic (PLEG): container finished" podID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerID="d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.248049 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerDied","Data":"d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.275415 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.408929 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"] Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.409142 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="dnsmasq-dns" containerID="cri-o://b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" gracePeriod=10 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.723699 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.782829 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"6601a68f-34a5-4629-ac74-97cb14e809f3\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.782874 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"6601a68f-34a5-4629-ac74-97cb14e809f3\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.784619 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6601a68f-34a5-4629-ac74-97cb14e809f3" (UID: "6601a68f-34a5-4629-ac74-97cb14e809f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.794367 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4" (OuterVolumeSpecName: "kube-api-access-kcbc4") pod "6601a68f-34a5-4629-ac74-97cb14e809f3" (UID: "6601a68f-34a5-4629-ac74-97cb14e809f3"). InnerVolumeSpecName "kube-api-access-kcbc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.806518 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.884691 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"4450e400-557b-4092-8f73-124910137dc4\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.884857 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"4450e400-557b-4092-8f73-124910137dc4\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.885203 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.885233 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.888007 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7" (OuterVolumeSpecName: "kube-api-access-dwnf7") pod "4450e400-557b-4092-8f73-124910137dc4" (UID: "4450e400-557b-4092-8f73-124910137dc4"). InnerVolumeSpecName "kube-api-access-dwnf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.889932 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4450e400-557b-4092-8f73-124910137dc4" (UID: "4450e400-557b-4092-8f73-124910137dc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.986559 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.986862 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.052785 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087750 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087789 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087880 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.091395 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc" (OuterVolumeSpecName: "kube-api-access-w6hzc") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "kube-api-access-w6hzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.141471 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config" (OuterVolumeSpecName: "config") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: E0202 07:04:41.150415 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc podName:50ef0678-fa8e-46f0-87b3-d4cd540ca293 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:41.650393206 +0000 UTC m=+1107.027661118 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293") : error deleting /var/lib/kubelet/pods/50ef0678-fa8e-46f0-87b3-d4cd540ca293/volume-subpaths: remove /var/lib/kubelet/pods/50ef0678-fa8e-46f0-87b3-d4cd540ca293/volume-subpaths: no such file or directory Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.150682 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.189539 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.189568 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.189578 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.255162 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ec7-account-create-update-x5rkz" event={"ID":"6601a68f-34a5-4629-ac74-97cb14e809f3","Type":"ContainerDied","Data":"1bfda80e82935159993cfdb80d57362500543b4c3a630820faa6ff4dbddd1689"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.255195 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfda80e82935159993cfdb80d57362500543b4c3a630820faa6ff4dbddd1689" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.255273 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266682 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266742 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerDied","Data":"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266784 4842 scope.go:117] "RemoveContainer" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266584 4842 generic.go:334] "Generic (PLEG): container finished" podID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" exitCode=0 Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.267104 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerDied","Data":"5ea515418db439b7b85e9f81e72d96b594a2f4593445c0e76fd6508fbe9dc808"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.269695 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ctcq" event={"ID":"4450e400-557b-4092-8f73-124910137dc4","Type":"ContainerDied","Data":"d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.269723 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.269760 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.289255 4842 scope.go:117] "RemoveContainer" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.311185 4842 scope.go:117] "RemoveContainer" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" Feb 02 07:04:41 crc kubenswrapper[4842]: E0202 07:04:41.313792 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c\": container with ID starting with b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c not found: ID does not exist" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.313839 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c"} err="failed to get container status \"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c\": rpc error: code = NotFound desc = could not find container \"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c\": container with ID starting with b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c not found: ID does not exist" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.313873 4842 scope.go:117] "RemoveContainer" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" Feb 02 07:04:41 crc kubenswrapper[4842]: E0202 07:04:41.314267 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b\": container with ID starting with 9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b not found: ID does not exist" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.314319 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b"} err="failed to get container status \"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b\": rpc error: code = NotFound desc = could not find container \"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b\": container with ID starting with 9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b not found: ID does not exist" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.376750 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.382173 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.446970 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" path="/var/lib/kubelet/pods/19378e36-9154-451c-88fe-dab4522aa0dc/volumes" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.551174 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.595592 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"cf6c9856-8e0e-462e-a2bb-b21847078b54\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.595767 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"cf6c9856-8e0e-462e-a2bb-b21847078b54\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.596620 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf6c9856-8e0e-462e-a2bb-b21847078b54" (UID: "cf6c9856-8e0e-462e-a2bb-b21847078b54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.604482 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j" (OuterVolumeSpecName: "kube-api-access-xns4j") pod "cf6c9856-8e0e-462e-a2bb-b21847078b54" (UID: "cf6c9856-8e0e-462e-a2bb-b21847078b54"). InnerVolumeSpecName "kube-api-access-xns4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.666125 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.674366 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.688606 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.696788 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"3f4b2578-8a31-4097-afd3-04bae6621094\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.696931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"3f4b2578-8a31-4097-afd3-04bae6621094\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697397 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697429 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697606 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f4b2578-8a31-4097-afd3-04bae6621094" (UID: "3f4b2578-8a31-4097-afd3-04bae6621094"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.698010 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.700334 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg" (OuterVolumeSpecName: "kube-api-access-4tbjg") pod "3f4b2578-8a31-4097-afd3-04bae6621094" (UID: "3f4b2578-8a31-4097-afd3-04bae6621094"). InnerVolumeSpecName "kube-api-access-4tbjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.798862 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"31bf41ed-98c7-44ed-abba-93b74a546e71\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.798976 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"31bf41ed-98c7-44ed-abba-93b74a546e71\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799039 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"ef83800c-79dc-4cfa-9f7c-194a44995d12\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799064 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"ef83800c-79dc-4cfa-9f7c-194a44995d12\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799335 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31bf41ed-98c7-44ed-abba-93b74a546e71" (UID: "31bf41ed-98c7-44ed-abba-93b74a546e71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799392 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799404 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799414 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799578 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef83800c-79dc-4cfa-9f7c-194a44995d12" (UID: "ef83800c-79dc-4cfa-9f7c-194a44995d12"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.802600 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl" (OuterVolumeSpecName: "kube-api-access-hxhpl") pod "ef83800c-79dc-4cfa-9f7c-194a44995d12" (UID: "ef83800c-79dc-4cfa-9f7c-194a44995d12"). InnerVolumeSpecName "kube-api-access-hxhpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.802630 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj" (OuterVolumeSpecName: "kube-api-access-t7svj") pod "31bf41ed-98c7-44ed-abba-93b74a546e71" (UID: "31bf41ed-98c7-44ed-abba-93b74a546e71"). InnerVolumeSpecName "kube-api-access-t7svj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900588 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900635 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900654 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900669 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.960762 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"] Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.973628 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"] Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.146463 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.146540 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.146601 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.147527 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.147629 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62" gracePeriod=600 Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.292578 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62" exitCode=0 Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.292651 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.292728 4842 scope.go:117] "RemoveContainer" containerID="409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.297876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-l9hwl" event={"ID":"ef83800c-79dc-4cfa-9f7c-194a44995d12","Type":"ContainerDied","Data":"45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.297958 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.297899 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.299796 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerDied","Data":"15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.300001 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.300064 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.302930 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerDied","Data":"b54b449d9636044ec4aa3fc42dc49895933f5c104686edd5988476072faf577b"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.303015 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54b449d9636044ec4aa3fc42dc49895933f5c104686edd5988476072faf577b" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.303112 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.307872 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerDied","Data":"5a286490efae1b2fcfd3289842091a1573875773e0e26817daf7cfeecd21545c"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.307903 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a286490efae1b2fcfd3289842091a1573875773e0e26817daf7cfeecd21545c" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.307962 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:42 crc kubenswrapper[4842]: E0202 07:04:42.533999 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f4b2578_8a31_4097_afd3_04bae6621094.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f4b2578_8a31_4097_afd3_04bae6621094.slice/crio-15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2\": RecentStats: unable to find data in memory cache]" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.318682 4842 generic.go:334] "Generic (PLEG): container finished" podID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerID="be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8" exitCode=0 Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.318774 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerDied","Data":"be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8"} Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.323852 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49"} Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.461119 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" path="/var/lib/kubelet/pods/50ef0678-fa8e-46f0-87b3-d4cd540ca293/volumes" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634008 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7qxb9"] Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634384 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634400 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634418 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634426 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634444 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634453 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634463 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634471 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634485 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="init" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634493 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="init" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634506 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4450e400-557b-4092-8f73-124910137dc4" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634514 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4450e400-557b-4092-8f73-124910137dc4" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634524 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634532 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634548 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634555 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634575 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="dnsmasq-dns" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634583 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" 
containerName="dnsmasq-dns" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634749 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="dnsmasq-dns" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634762 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634774 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4450e400-557b-4092-8f73-124910137dc4" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634785 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634794 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634806 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634824 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634839 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.635422 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.638984 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.646627 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fpq5h" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.650024 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7qxb9"] Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.729850 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.729919 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.729981 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.730148 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831774 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831823 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831853 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831938 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod 
\"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.838569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.839991 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.851348 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.852034 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.968098 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.338156 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7qxb9"] Feb 02 07:04:44 crc kubenswrapper[4842]: W0202 07:04:44.343338 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd42ce_4a62_486b_9571_58d789ca2d38.slice/crio-6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52 WatchSource:0}: Error finding container 6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52: Status 404 returned error can't find the container with id 6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52 Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.582115 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.646836 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.646923 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.646972 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647021 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647097 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647141 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647917 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.648255 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.653284 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn" (OuterVolumeSpecName: "kube-api-access-p4zkn") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "kube-api-access-p4zkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.655514 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.669496 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts" (OuterVolumeSpecName: "scripts") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.670341 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.671446 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748923 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748964 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748977 4842 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748992 4842 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.749004 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.749017 4842 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.749028 4842 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.343432 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerDied","Data":"1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade"} Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.343465 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.343512 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade" Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.344759 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerStarted","Data":"6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52"} Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.381176 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:04:46 crc kubenswrapper[4842]: E0202 07:04:46.381921 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerName="swift-ring-rebalance" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.381938 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerName="swift-ring-rebalance" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.382138 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerName="swift-ring-rebalance" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.382770 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.386595 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.398547 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.474529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.474622 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.576877 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.577125 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " 
pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.578000 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.601003 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.707836 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.084639 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.093109 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.146980 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:04:47 crc kubenswrapper[4842]: W0202 07:04:47.151501 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0cbe107_ad1a_47aa_9b91_4a08c8b712fb.slice/crio-81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6 WatchSource:0}: Error finding container 81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6: Status 404 returned error can't find the container with id 81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6 Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.247536 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.366189 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerStarted","Data":"baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e"} Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.366615 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerStarted","Data":"81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6"} Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.386200 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-h2lm5" podStartSLOduration=1.386185854 podStartE2EDuration="1.386185854s" podCreationTimestamp="2026-02-02 07:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:47.381270443 +0000 UTC m=+1112.758538365" watchObservedRunningTime="2026-02-02 07:04:47.386185854 +0000 UTC m=+1112.763453766" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.802623 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:04:48 crc kubenswrapper[4842]: I0202 07:04:48.377047 4842 generic.go:334] "Generic (PLEG): container finished" podID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerID="baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e" exitCode=0 Feb 02 07:04:48 crc kubenswrapper[4842]: I0202 07:04:48.377105 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerDied","Data":"baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e"} Feb 02 07:04:48 crc kubenswrapper[4842]: I0202 07:04:48.379259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"ab889a1e60a176a5157cbf2492af02320a93e4b8f19cc77b84445a221a0d1b90"} Feb 02 07:04:49 crc kubenswrapper[4842]: I0202 07:04:49.387113 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766"} Feb 02 07:04:49 crc kubenswrapper[4842]: I0202 07:04:49.782257 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 07:04:53 crc kubenswrapper[4842]: I0202 07:04:53.820006 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sgwrm" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" probeResult="failure" output=< Feb 02 07:04:53 crc kubenswrapper[4842]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 07:04:53 crc kubenswrapper[4842]: > Feb 02 07:04:53 crc kubenswrapper[4842]: I0202 07:04:53.837796 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:53 crc kubenswrapper[4842]: I0202 07:04:53.854352 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.082799 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.084032 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.086585 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.112058 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.262868 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.262970 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263029 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263048 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263065 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263195 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365096 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: 
\"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365156 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365194 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365228 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365244 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365266 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365622 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365682 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.366703 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: 
\"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.367591 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.404110 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.413705 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.450442 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerDied","Data":"81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6"} Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.450767 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.491557 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.594615 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.594810 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.595614 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" (UID: "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.601719 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv" (OuterVolumeSpecName: "kube-api-access-59nqv") pod "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" (UID: "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb"). InnerVolumeSpecName "kube-api-access-59nqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.696438 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.696462 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.885582 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:04:55 crc kubenswrapper[4842]: W0202 07:04:55.895425 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36369d86_4106_4626_9771_c63ca46e2b3e.slice/crio-524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5 WatchSource:0}: Error finding container 524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5: Status 404 returned error can't find the container with id 524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5 Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.494510 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerStarted","Data":"f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.496450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm-config-hhzx8" event={"ID":"36369d86-4106-4626-9771-c63ca46e2b3e","Type":"ContainerStarted","Data":"524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499631 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499653 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499663 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499675 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594"} Feb 02 07:04:57 crc kubenswrapper[4842]: I0202 07:04:57.509306 4842 generic.go:334] "Generic (PLEG): container finished" podID="36369d86-4106-4626-9771-c63ca46e2b3e" containerID="59526756b474c2762ebc0f7a6578c91c40cc272db00fa72f3384382706ed53e2" exitCode=0 Feb 02 07:04:57 crc kubenswrapper[4842]: I0202 07:04:57.511381 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm-config-hhzx8" event={"ID":"36369d86-4106-4626-9771-c63ca46e2b3e","Type":"ContainerDied","Data":"59526756b474c2762ebc0f7a6578c91c40cc272db00fa72f3384382706ed53e2"} Feb 02 07:04:57 crc kubenswrapper[4842]: I0202 07:04:57.532572 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7qxb9" podStartSLOduration=3.371759485 podStartE2EDuration="14.532555972s" podCreationTimestamp="2026-02-02 07:04:43 +0000 UTC" firstStartedPulling="2026-02-02 07:04:44.349966372 +0000 UTC m=+1109.727234284" lastFinishedPulling="2026-02-02 07:04:55.510762829 +0000 UTC m=+1120.888030771" observedRunningTime="2026-02-02 07:04:57.528190325 +0000 UTC m=+1122.905458237" watchObservedRunningTime="2026-02-02 07:04:57.532555972 +0000 UTC m=+1122.909823884" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531585 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531609 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531628 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.834940 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.921549 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.959716 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.959901 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.959973 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960012 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960058 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960094 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960475 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run" (OuterVolumeSpecName: "var-run") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.961421 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts" (OuterVolumeSpecName: "scripts") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.961795 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.961878 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.962528 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.979121 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk" (OuterVolumeSpecName: "kube-api-access-98sxk") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "kube-api-access-98sxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061855 4842 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061883 4842 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061894 4842 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061903 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061912 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061920 4842 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.539889 4842 generic.go:334] "Generic (PLEG): container finished" podID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerID="15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef" exitCode=0 Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.540132 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerDied","Data":"15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef"} Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.561551 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391"} Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.564208 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm-config-hhzx8" event={"ID":"36369d86-4106-4626-9771-c63ca46e2b3e","Type":"ContainerDied","Data":"524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5"} Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.564263 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.564362 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.048403 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.056731 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.574578 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerStarted","Data":"3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.575148 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580431 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580457 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580757 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580771 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580779 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801"} Feb 02 07:05:00 crc kubenswrapper[4842]: 
I0202 07:05:00.582080 4842 generic.go:334] "Generic (PLEG): container finished" podID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerID="6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b" exitCode=0 Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.582114 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerDied","Data":"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.607507 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.160761207 podStartE2EDuration="1m7.607487198s" podCreationTimestamp="2026-02-02 07:03:53 +0000 UTC" firstStartedPulling="2026-02-02 07:03:55.432933717 +0000 UTC m=+1060.810201629" lastFinishedPulling="2026-02-02 07:04:25.879659698 +0000 UTC m=+1091.256927620" observedRunningTime="2026-02-02 07:05:00.603266134 +0000 UTC m=+1125.980534066" watchObservedRunningTime="2026-02-02 07:05:00.607487198 +0000 UTC m=+1125.984755110" Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.443093 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" path="/var/lib/kubelet/pods/36369d86-4106-4626-9771-c63ca46e2b3e/volumes" Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.597060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060"} Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.600065 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerStarted","Data":"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d"} Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.600428 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.665617 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.186365561 podStartE2EDuration="32.665600502s" podCreationTimestamp="2026-02-02 07:04:29 +0000 UTC" firstStartedPulling="2026-02-02 07:04:47.805887222 +0000 UTC m=+1113.183155134" lastFinishedPulling="2026-02-02 07:04:59.285122163 +0000 UTC m=+1124.662390075" observedRunningTime="2026-02-02 07:05:01.658035565 +0000 UTC m=+1127.035303477" watchObservedRunningTime="2026-02-02 07:05:01.665600502 +0000 UTC m=+1127.042868414" Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.685267 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371967.169525 podStartE2EDuration="1m9.685249946s" podCreationTimestamp="2026-02-02 07:03:52 +0000 UTC" firstStartedPulling="2026-02-02 07:03:54.578804137 +0000 UTC m=+1059.956072049" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:01.679479484 +0000 UTC m=+1127.056747436" watchObservedRunningTime="2026-02-02 07:05:01.685249946 +0000 UTC m=+1127.062517858" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051406 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:02 crc 
kubenswrapper[4842]: E0202 07:05:02.051714 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" containerName="ovn-config" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051725 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" containerName="ovn-config" Feb 02 07:05:02 crc kubenswrapper[4842]: E0202 07:05:02.051753 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerName="mariadb-account-create-update" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051759 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerName="mariadb-account-create-update" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051925 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" containerName="ovn-config" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051946 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerName="mariadb-account-create-update" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.052835 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.056351 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.067652 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215244 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215350 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215389 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215531 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2lr\" (UniqueName: 
\"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215597 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318009 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318131 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318189 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318295 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318514 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318624 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.319322 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.319677 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.320024 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.320265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.320534 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.347641 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.371569 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.610577 4842 generic.go:334] "Generic (PLEG): container finished" podID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerID="f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8" exitCode=0 Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.610644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerDied","Data":"f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8"} Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.676341 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:03 crc kubenswrapper[4842]: E0202 07:05:03.056593 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57953a5b_9fe5_49e3_bc39_7ac347467088.slice/crio-e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:05:03 crc kubenswrapper[4842]: I0202 07:05:03.619301 4842 generic.go:334] "Generic (PLEG): container finished" podID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f" exitCode=0 Feb 02 07:05:03 crc kubenswrapper[4842]: I0202 07:05:03.620332 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerDied","Data":"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f"} Feb 02 07:05:03 crc kubenswrapper[4842]: I0202 07:05:03.620357 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerStarted","Data":"45616b816ffed6aadd7c2954b933ac19362083c5815ff3769fd5f6861a68956c"} Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.012114 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145262 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145339 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145514 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145579 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.151073 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh" (OuterVolumeSpecName: "kube-api-access-xk4lh") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "kube-api-access-xk4lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.151195 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.166477 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.192047 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data" (OuterVolumeSpecName: "config-data") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247904 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247953 4842 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247972 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247990 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.630509 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerDied","Data":"6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52"} Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.630878 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.630828 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.633896 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerStarted","Data":"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"} Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.634108 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.667126 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" podStartSLOduration=2.667100699 podStartE2EDuration="2.667100699s" podCreationTimestamp="2026-02-02 07:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:04.662559687 +0000 UTC m=+1130.039827639" watchObservedRunningTime="2026-02-02 07:05:04.667100699 +0000 UTC m=+1130.044368651" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.024944 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.047717 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:05 crc kubenswrapper[4842]: E0202 07:05:05.048055 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerName="glance-db-sync" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.048071 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerName="glance-db-sync" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.048240 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerName="glance-db-sync" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.048979 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.068115 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164752 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164796 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164844 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164924 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164999 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.165235 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267050 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267096 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267135 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267155 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267174 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267231 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267928 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.268008 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.268128 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.268832 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.269026 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.287449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dmt2\" (UniqueName: 
\"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.367029 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: W0202 07:05:05.648390 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode793f6a1_ed49_496a_af57_84d696daf728.slice/crio-b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d WatchSource:0}: Error finding container b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d: Status 404 returned error can't find the container with id b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.652328 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.667077 4842 generic.go:334] "Generic (PLEG): container finished" podID="e793f6a1-ed49-496a-af57-84d696daf728" containerID="dca3dac891364e01eb6e12794cb5bb79081189c188f045ba72387b730d26feaa" exitCode=0 Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.667787 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" containerID="cri-o://3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c" gracePeriod=10 Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.670560 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerDied","Data":"dca3dac891364e01eb6e12794cb5bb79081189c188f045ba72387b730d26feaa"} Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.670608 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerStarted","Data":"b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d"} Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.072291 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196174 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196268 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196364 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196467 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196488 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.201719 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr" (OuterVolumeSpecName: "kube-api-access-vw2lr") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "kube-api-access-vw2lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.233115 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.237512 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.239525 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.252885 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config" (OuterVolumeSpecName: "config") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.258172 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297738 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297775 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297787 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297796 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297806 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297815 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.676645 4842 generic.go:334] "Generic (PLEG): container finished" podID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c" exitCode=0 Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.676696 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerDied","Data":"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"} Feb 02 
07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.677088 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerDied","Data":"45616b816ffed6aadd7c2954b933ac19362083c5815ff3769fd5f6861a68956c"} Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.677111 4842 scope.go:117] "RemoveContainer" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.676748 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.680326 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerStarted","Data":"b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702"} Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.680991 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.702456 4842 scope.go:117] "RemoveContainer" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.708291 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podStartSLOduration=2.708272653 podStartE2EDuration="2.708272653s" podCreationTimestamp="2026-02-02 07:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:07.705606707 +0000 UTC m=+1133.082874659" watchObservedRunningTime="2026-02-02 07:05:07.708272653 +0000 UTC m=+1133.085540605" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.730273 4842 scope.go:117] "RemoveContainer" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c" Feb 02 07:05:07 crc kubenswrapper[4842]: E0202 07:05:07.730822 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c\": container with ID starting with 3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c not found: ID does not exist" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.730865 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"} err="failed to get container status \"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c\": rpc error: code = NotFound desc = could not find container \"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c\": container with ID starting with 3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c not found: ID does not exist" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.730892 4842 scope.go:117] "RemoveContainer" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f" Feb 02 07:05:07 crc kubenswrapper[4842]: E0202 07:05:07.731244 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f\": container with ID starting with e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f not found: ID does not exist" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.731265 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f"} err="failed to get container status \"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f\": rpc error: code = NotFound desc = could not find container \"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f\": container with ID starting with e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f not found: ID does not exist" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.732489 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.739405 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:09 crc kubenswrapper[4842]: I0202 07:05:09.452347 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" path="/var/lib/kubelet/pods/57953a5b-9fe5-49e3-bc39-7ac347467088/volumes" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.171548 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666043 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:05:14 crc kubenswrapper[4842]: E0202 07:05:14.666602 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666633 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" Feb 02 07:05:14 crc kubenswrapper[4842]: E0202 07:05:14.666657 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="init" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666670 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="init" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666961 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.667920 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.680696 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.733576 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.733664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.771331 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.773014 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.777661 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.778689 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.779916 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.794769 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.816741 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836184 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836286 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836340 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836406 
4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836473 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836512 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.837442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.862678 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.863479 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.872012 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.873117 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.878252 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.881671 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.937969 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938084 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938143 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938163 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938194 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938323 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.939331 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") pod 
\"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.967390 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.968569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.976335 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.977326 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.989044 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.989402 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.038164 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.039468 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.040145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.046043 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.046443 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.040240 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047165 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047205 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047641 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.048387 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.056817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.071192 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.109728 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.112907 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.113874 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.114202 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.118270 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.123069 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148507 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148561 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148595 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148632 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148654 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148670 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148716 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.149271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.173060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.223111 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253550 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253949 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.258731 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod 
\"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.271622 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.284587 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.293308 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.294125 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.368608 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.376912 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.431588 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.451509 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.458290 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.458568 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.460626 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.463825 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" containerID="cri-o://f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3" gracePeriod=10 Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.774522 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.789696 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerStarted","Data":"a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090"} Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.789755 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerStarted","Data":"28a49c26ed5983df61dd478607c39fd13bcfdd80f726d093a1fa96092771df86"} Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.807430 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.810709 4842 generic.go:334] "Generic (PLEG): container finished" podID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerID="f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3" exitCode=0 Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.810770 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerDied","Data":"f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:15.831847 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8rdwx" podStartSLOduration=1.83182449 podStartE2EDuration="1.83182449s" podCreationTimestamp="2026-02-02 07:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:15.817262282 +0000 UTC m=+1141.194530204" watchObservedRunningTime="2026-02-02 07:05:15.83182449 +0000 UTC m=+1141.209092402" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:15.865884 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.200255 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"] Feb 02 07:05:16 crc kubenswrapper[4842]: W0202 07:05:16.232274 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1ffaeb5_5dc3_4ead_8b43_701f81a8c965.slice/crio-c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f WatchSource:0}: Error finding container c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f: Status 404 returned error can't find the container with id 
c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.251909 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.370223 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.392734 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"] Feb 02 07:05:16 crc kubenswrapper[4842]: W0202 07:05:16.481982 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c852e5a_26fe_4905_8483_4619c280f9c0.slice/crio-aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae WatchSource:0}: Error finding container aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae: Status 404 returned error can't find the container with id aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae Feb 02 07:05:16 crc kubenswrapper[4842]: W0202 07:05:16.482400 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc51cea52_ce54_4855_9d4c_97817c4cc6b0.slice/crio-4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437 WatchSource:0}: Error finding container 4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437: Status 404 returned error can't find the container with id 4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.484106 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.488469 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.580614 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.580986 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.581027 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.581143 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.581207 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.593978 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp" (OuterVolumeSpecName: "kube-api-access-5gcnp") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "kube-api-access-5gcnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.640003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.683483 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.702939 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.705699 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config" (OuterVolumeSpecName: "config") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.711178 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.711413 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785069 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785107 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785116 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785124 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.825139 4842 generic.go:334] "Generic (PLEG): container finished" podID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerID="17bb3eec7905f7b5df5e9c3137f1a5db8fc820e99f038ef4113064b8ca0bb24d" exitCode=0 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.825359 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-ft5kt" event={"ID":"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965","Type":"ContainerDied","Data":"17bb3eec7905f7b5df5e9c3137f1a5db8fc820e99f038ef4113064b8ca0bb24d"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.825471 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-ft5kt" event={"ID":"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965","Type":"ContainerStarted","Data":"c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.826951 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerStarted","Data":"326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.826994 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerStarted","Data":"4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.828297 4842 generic.go:334] "Generic (PLEG): container finished" podID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerID="2b38ab8a50c4bfdef3036052e4dbdb50598c007951f872fa5af56a866e47db58" exitCode=0 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.828354 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hhd7d" event={"ID":"27c72b5c-16bb-4404-8c00-6b37ed7d9b47","Type":"ContainerDied","Data":"2b38ab8a50c4bfdef3036052e4dbdb50598c007951f872fa5af56a866e47db58"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.828370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hhd7d" event={"ID":"27c72b5c-16bb-4404-8c00-6b37ed7d9b47","Type":"ContainerStarted","Data":"3d91b23d6d9b6c109112ab4417aa2315357fa56338dce12c560bf3423a87cb00"} Feb 02 07:05:16 crc 
kubenswrapper[4842]: I0202 07:05:16.829439 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerStarted","Data":"9c624bec2cfab2b93f6c6a45dcd225604c34747efe7f2303db55b6d98511faf5"}
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.831031 4842 generic.go:334] "Generic (PLEG): container finished" podID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerID="185ab6e958e5fc2a5da9e833e3789438b8d16f440f7c53e0467e8ff307a5f7c8" exitCode=0
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.831101 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-mtd79" event={"ID":"d82484f3-c883-4c12-8ca1-6de8ead67139","Type":"ContainerDied","Data":"185ab6e958e5fc2a5da9e833e3789438b8d16f440f7c53e0467e8ff307a5f7c8"}
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.831123 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-mtd79" event={"ID":"d82484f3-c883-4c12-8ca1-6de8ead67139","Type":"ContainerStarted","Data":"cfe5692a4b77a70b1e8ebbd97f4ff631dfa1ec5b8e9d15783262873cfb83076b"}
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.832238 4842 generic.go:334] "Generic (PLEG): container finished" podID="6418a243-5699-42a3-8fab-d65c530c9951" containerID="a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090" exitCode=0
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.832286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerDied","Data":"a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090"}
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.833328 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerStarted","Data":"1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f"}
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.833352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerStarted","Data":"aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae"}
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.835530 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerDied","Data":"7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473"}
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.835562 4842 scope.go:117] "RemoveContainer" containerID="f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3"
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.835616 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.852657 4842 scope.go:117] "RemoveContainer" containerID="95945828629b93199fdf9c3ec54c43205bcf2d7c6c586860cf34627eab21e480"
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.904644 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-8p487" podStartSLOduration=2.904409001 podStartE2EDuration="2.904409001s" podCreationTimestamp="2026-02-02 07:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:16.895515882 +0000 UTC m=+1142.272783794" watchObservedRunningTime="2026-02-02 07:05:16.904409001 +0000 UTC m=+1142.281676913"
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.921650 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bfdd-account-create-update-rws4k" podStartSLOduration=1.921634755 podStartE2EDuration="1.921634755s" podCreationTimestamp="2026-02-02 07:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:16.913624098 +0000 UTC m=+1142.290892000" watchObservedRunningTime="2026-02-02 07:05:16.921634755 +0000 UTC m=+1142.298902667"
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.950749 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"]
Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.957097 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"]
Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.446925 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" path="/var/lib/kubelet/pods/f57fef97-6ad3-4b54-9859-2b33853f7f6d/volumes"
Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.845152 4842 generic.go:334] "Generic (PLEG): container finished" podID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerID="1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f" exitCode=0
Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.845234 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerDied","Data":"1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f"}
Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.849850 4842 generic.go:334] "Generic (PLEG): container finished" podID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerID="326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0" exitCode=0
Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.849989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerDied","Data":"326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0"}
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.361423 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rdwx"
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.366662 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79"
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.371646 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt"
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.383697 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hhd7d"
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508027 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508138 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"6418a243-5699-42a3-8fab-d65c530c9951\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508159 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508176 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"6418a243-5699-42a3-8fab-d65c530c9951\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508230 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") pod \"d82484f3-c883-4c12-8ca1-6de8ead67139\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508250 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"d82484f3-c883-4c12-8ca1-6de8ead67139\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508369 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") "
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.509501 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" (UID: "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.510038 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6418a243-5699-42a3-8fab-d65c530c9951" (UID: "6418a243-5699-42a3-8fab-d65c530c9951"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.510082 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d82484f3-c883-4c12-8ca1-6de8ead67139" (UID: "d82484f3-c883-4c12-8ca1-6de8ead67139"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.510128 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27c72b5c-16bb-4404-8c00-6b37ed7d9b47" (UID: "27c72b5c-16bb-4404-8c00-6b37ed7d9b47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.514763 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf" (OuterVolumeSpecName: "kube-api-access-rs2cf") pod "d82484f3-c883-4c12-8ca1-6de8ead67139" (UID: "d82484f3-c883-4c12-8ca1-6de8ead67139"). InnerVolumeSpecName "kube-api-access-rs2cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.514961 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl" (OuterVolumeSpecName: "kube-api-access-rmqkl") pod "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" (UID: "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965"). InnerVolumeSpecName "kube-api-access-rmqkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.516620 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz" (OuterVolumeSpecName: "kube-api-access-dhggz") pod "27c72b5c-16bb-4404-8c00-6b37ed7d9b47" (UID: "27c72b5c-16bb-4404-8c00-6b37ed7d9b47"). InnerVolumeSpecName "kube-api-access-dhggz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.519051 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v" (OuterVolumeSpecName: "kube-api-access-bwr6v") pod "6418a243-5699-42a3-8fab-d65c530c9951" (UID: "6418a243-5699-42a3-8fab-d65c530c9951"). InnerVolumeSpecName "kube-api-access-bwr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.609882 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610039 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610051 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610063 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610071 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610080 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610090 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610099 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.903369 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.903359 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hhd7d" event={"ID":"27c72b5c-16bb-4404-8c00-6b37ed7d9b47","Type":"ContainerDied","Data":"3d91b23d6d9b6c109112ab4417aa2315357fa56338dce12c560bf3423a87cb00"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.903501 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d91b23d6d9b6c109112ab4417aa2315357fa56338dce12c560bf3423a87cb00" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.906746 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.906810 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-mtd79" event={"ID":"d82484f3-c883-4c12-8ca1-6de8ead67139","Type":"ContainerDied","Data":"cfe5692a4b77a70b1e8ebbd97f4ff631dfa1ec5b8e9d15783262873cfb83076b"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.906930 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfe5692a4b77a70b1e8ebbd97f4ff631dfa1ec5b8e9d15783262873cfb83076b" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.908234 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerDied","Data":"28a49c26ed5983df61dd478607c39fd13bcfdd80f726d093a1fa96092771df86"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.908272 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a49c26ed5983df61dd478607c39fd13bcfdd80f726d093a1fa96092771df86" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.908352 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.909905 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-ft5kt" event={"ID":"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965","Type":"ContainerDied","Data":"c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.909969 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.910044 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.281195 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.290096 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.470785 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"9c852e5a-26fe-4905-8483-4619c280f9c0\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.470869 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"9c852e5a-26fe-4905-8483-4619c280f9c0\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.470928 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.471083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.471580 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c852e5a-26fe-4905-8483-4619c280f9c0" (UID: "9c852e5a-26fe-4905-8483-4619c280f9c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.471861 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c51cea52-ce54-4855-9d4c-97817c4cc6b0" (UID: "c51cea52-ce54-4855-9d4c-97817c4cc6b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.476648 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84" (OuterVolumeSpecName: "kube-api-access-mpj84") pod "9c852e5a-26fe-4905-8483-4619c280f9c0" (UID: "9c852e5a-26fe-4905-8483-4619c280f9c0"). InnerVolumeSpecName "kube-api-access-mpj84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.477553 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75" (OuterVolumeSpecName: "kube-api-access-jfv75") pod "c51cea52-ce54-4855-9d4c-97817c4cc6b0" (UID: "c51cea52-ce54-4855-9d4c-97817c4cc6b0"). InnerVolumeSpecName "kube-api-access-jfv75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574748 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574814 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574840 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574866 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.951673 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerDied","Data":"4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437"} Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.952072 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.951712 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.955268 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerStarted","Data":"9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f"} Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.957899 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerDied","Data":"aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae"} Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.957972 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.958058 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.985620 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-z87kx" podStartSLOduration=2.451864836 podStartE2EDuration="7.98559588s" podCreationTimestamp="2026-02-02 07:05:15 +0000 UTC" firstStartedPulling="2026-02-02 07:05:16.666315106 +0000 UTC m=+1142.043583038" lastFinishedPulling="2026-02-02 07:05:22.20004617 +0000 UTC m=+1147.577314082" observedRunningTime="2026-02-02 07:05:22.97625568 +0000 UTC m=+1148.353523602" watchObservedRunningTime="2026-02-02 07:05:22.98559588 +0000 UTC m=+1148.362863802" Feb 02 07:05:25 crc kubenswrapper[4842]: I0202 07:05:25.993554 4842 generic.go:334] "Generic (PLEG): container finished" podID="3b89146d-a545-4525-8744-723e0d9248b5" containerID="9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f" exitCode=0 Feb 02 07:05:25 crc kubenswrapper[4842]: I0202 07:05:25.993644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerDied","Data":"9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f"} Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.427303 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.484909 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"3b89146d-a545-4525-8744-723e0d9248b5\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.484994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"3b89146d-a545-4525-8744-723e0d9248b5\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.485073 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"3b89146d-a545-4525-8744-723e0d9248b5\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.502040 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4" (OuterVolumeSpecName: "kube-api-access-xpxv4") pod "3b89146d-a545-4525-8744-723e0d9248b5" (UID: "3b89146d-a545-4525-8744-723e0d9248b5"). InnerVolumeSpecName "kube-api-access-xpxv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.527808 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b89146d-a545-4525-8744-723e0d9248b5" (UID: "3b89146d-a545-4525-8744-723e0d9248b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.528611 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data" (OuterVolumeSpecName: "config-data") pod "3b89146d-a545-4525-8744-723e0d9248b5" (UID: "3b89146d-a545-4525-8744-723e0d9248b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.588166 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.588248 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.588273 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.018051 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerDied","Data":"9c624bec2cfab2b93f6c6a45dcd225604c34747efe7f2303db55b6d98511faf5"} Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.018110 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c624bec2cfab2b93f6c6a45dcd225604c34747efe7f2303db55b6d98511faf5" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.018177 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.322711 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323189 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6418a243-5699-42a3-8fab-d65c530c9951" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323294 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6418a243-5699-42a3-8fab-d65c530c9951" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323351 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323395 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323450 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323493 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323543 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323586 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323633 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323680 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323740 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b89146d-a545-4525-8744-723e0d9248b5" containerName="keystone-db-sync" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323784 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b89146d-a545-4525-8744-723e0d9248b5" containerName="keystone-db-sync" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323834 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="init" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323877 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="init" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323932 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323979 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.324032 4842 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.324129 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326432 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326532 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6418a243-5699-42a3-8fab-d65c530c9951" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326591 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326647 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326697 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b89146d-a545-4525-8744-723e0d9248b5" containerName="keystone-db-sync" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326742 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326792 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326845 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.327683 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.346090 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.347601 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.352679 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.352799 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.352878 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.353068 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.353138 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.353241 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.381275 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400364 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400392 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400447 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400470 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400491 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod 
\"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400511 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400536 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400578 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400624 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400686 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502643 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502686 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502742 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: 
\"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502776 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.503727 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.503750 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.504457 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.504572 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.504703 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505085 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505156 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505274 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 
07:05:28.505402 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.506159 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.506577 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.506124 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508399 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508447 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508603 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508866 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.509836 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.559200 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-phj68"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.560154 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.565546 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.569635 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.571318 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.571486 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fr64b" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.587026 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609196 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609259 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609285 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609411 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609452 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.634283 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-phj68"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.644298 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rpkx6"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.648368 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.649272 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.667934 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qlr5t" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.668128 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.669736 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.670837 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.672557 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rpkx6"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710406 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710453 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710488 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710530 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"cinder-db-sync-phj68\" (UID: 
\"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710546 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710565 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710644 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710769 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.714814 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.721408 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.722842 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.725363 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.755183 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.795063 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-sjstk"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.796392 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.801396 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.802131 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-drtzj" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813119 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813174 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813223 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813276 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813341 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813403 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod 
\"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.819960 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-sjstk"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.826031 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.835032 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.849073 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.856311 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.858355 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.864171 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.883517 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.883714 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.902016 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916674 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916763 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916783 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916810 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916829 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916864 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916886 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916922 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916976 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916996 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.923035 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.981960 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.924190 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.990079 4842 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.999228 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.010290 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2ddsf"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.031410 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035311 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035349 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035538 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035587 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035626 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035696 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.036798 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.038561 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039160 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039216 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rf5dt" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039552 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039668 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.043905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.045559 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.045769 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.051910 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.052205 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.058050 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.061094 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2ddsf"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.139698 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.139774 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140171 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140212 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140853 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140887 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140928 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140968 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140997 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.141013 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.150680 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.214676 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242242 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242363 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242391 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242412 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242427 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243137 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"placement-db-sync-2ddsf\" 
(UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243189 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243224 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243315 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243335 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244139 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244478 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244702 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244801 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 
07:05:29.245637 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.246229 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.248900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.251089 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.260844 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.262113 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.320196 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.361485 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.365143 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.459944 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.481783 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.483540 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.491955 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.492012 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fpq5h" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.492170 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.492229 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.511922 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.519009 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.520397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.522315 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.522597 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.527008 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.625658 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rpkx6"] Feb 02 07:05:29 crc kubenswrapper[4842]: W0202 07:05:29.635692 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc49955b5_5145_4939_91e5_280569e18a33.slice/crio-08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec WatchSource:0}: Error finding container 08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec: Status 404 returned error can't find the container with id 08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.636390 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-phj68"] Feb 02 07:05:29 crc kubenswrapper[4842]: W0202 07:05:29.644811 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9f1c72e_953b_45ba_ba69_c7574f82e8ad.slice/crio-e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704 WatchSource:0}: Error finding container e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704: Status 404 returned error can't find the container with id e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704 Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648116 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") 
" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648151 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648169 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648186 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648208 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648236 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648255 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648289 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648302 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648344 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648363 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648396 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648420 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648448 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648493 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648509 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749610 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749645 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749665 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749685 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749712 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749730 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749750 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749764 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749787 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749844 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749898 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749953 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749971 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.750002 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.750178 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.750607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.751562 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.751598 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.751624 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" 
Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.752159 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.753675 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.754912 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.755428 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.763273 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.767377 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.773971 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.777307 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.778294 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.778917 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.806289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.827181 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.833249 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.836205 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-sjstk"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.840500 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.889607 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:05:29 crc kubenswrapper[4842]: W0202 07:05:29.995762 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfff8a308_89ab_409f_9053_6a363794df83.slice/crio-7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33 WatchSource:0}: Error finding container 7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33: Status 404 returned error can't find the container with id 7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33 Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.996014 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2ddsf"] Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.066085 4842 generic.go:334] "Generic (PLEG): container finished" podID="7451d324-f6ed-4ad3-aacb-875192778c83" containerID="ada27da3da689853a4b7facfad88a4f4ff5e03c7c2e70f234e5841ce1d04d4c9" exitCode=0 Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.066162 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" event={"ID":"7451d324-f6ed-4ad3-aacb-875192778c83","Type":"ContainerDied","Data":"ada27da3da689853a4b7facfad88a4f4ff5e03c7c2e70f234e5841ce1d04d4c9"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.066188 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" event={"ID":"7451d324-f6ed-4ad3-aacb-875192778c83","Type":"ContainerStarted","Data":"2a917ef164d764f672ca6277247b623a831a1cc93b3d32269b491951233d1ed8"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.068637 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerStarted","Data":"cd2d0997e2cc127c80bb06f907a598f4209b55d656a3634a4391e4cc9d674026"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.070858 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerStarted","Data":"e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.070898 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerStarted","Data":"08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.074508 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"7ea6f3db6a36a7dee937382b0699d18f0905deeb5700b93c12a3f06c02d6628f"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.074615 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.091631 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.093346 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerStarted","Data":"7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.107513 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerStarted","Data":"e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.112716 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerStarted","Data":"3bf1c02d1eb4a6fd6bfb8e0d7089ca1be72bb9eccd12b09bde66e78b797862a2"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.119730 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rpkx6" podStartSLOduration=2.119705626 podStartE2EDuration="2.119705626s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:30.117587384 +0000 UTC m=+1155.494855286" watchObservedRunningTime="2026-02-02 07:05:30.119705626 +0000 UTC m=+1155.496973538" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.121313 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerStarted","Data":"7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.121362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" 
event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerStarted","Data":"fd34e55492114d1dc15256d5270c613a7bb387100ffe277e3f9d66d6fd42c42e"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.205140 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r6tjh" podStartSLOduration=2.20510737 podStartE2EDuration="2.20510737s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:30.202483436 +0000 UTC m=+1155.579751348" watchObservedRunningTime="2026-02-02 07:05:30.20510737 +0000 UTC m=+1155.582375282" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.460197 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566330 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566450 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566507 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566599 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566614 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566631 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.574018 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz" (OuterVolumeSpecName: "kube-api-access-4j4fz") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "kube-api-access-4j4fz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.602089 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.602110 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.620058 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.620722 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config" (OuterVolumeSpecName: "config") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.627374 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670615 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670858 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670871 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670880 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670892 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670900 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.748771 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.159071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerStarted","Data":"8c23fbb0fff0a16501dd8fc713b53a51e1c6260cd6f5e5446454a32930538b9a"} Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.171651 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.172740 4842 generic.go:334] "Generic (PLEG): container finished" podID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerID="b65de85796493b7fd1d1b4d84ddbf8a0d1cb6cbceca0fba243ff835d64eb5002" exitCode=0 Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.172833 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerDied","Data":"b65de85796493b7fd1d1b4d84ddbf8a0d1cb6cbceca0fba243ff835d64eb5002"} Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.197124 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.197668 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" event={"ID":"7451d324-f6ed-4ad3-aacb-875192778c83","Type":"ContainerDied","Data":"2a917ef164d764f672ca6277247b623a831a1cc93b3d32269b491951233d1ed8"} Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.197699 4842 scope.go:117] "RemoveContainer" containerID="ada27da3da689853a4b7facfad88a4f4ff5e03c7c2e70f234e5841ce1d04d4c9" Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.290634 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.300014 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.330578 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.361628 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.451795 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" path="/var/lib/kubelet/pods/7451d324-f6ed-4ad3-aacb-875192778c83/volumes" Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.731869 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: W0202 07:05:31.753483 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfcf9b2_c06f_457c_a13c_b3dd8399eb89.slice/crio-75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66 WatchSource:0}: Error finding container 75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66: Status 404 returned error can't find the container with id 75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66 Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.229944 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerStarted","Data":"ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9"} Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.231918 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerStarted","Data":"75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66"} Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.237875 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerStarted","Data":"070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17"} Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.238127 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.267638 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" podStartSLOduration=4.267600126 podStartE2EDuration="4.267600126s" 
podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:32.260852139 +0000 UTC m=+1157.638120061" watchObservedRunningTime="2026-02-02 07:05:32.267600126 +0000 UTC m=+1157.644868038" Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.253320 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerStarted","Data":"2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda"} Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.253774 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" containerID="cri-o://ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.253884 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" containerID="cri-o://2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257843 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerStarted","Data":"c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d"} Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257887 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerStarted","Data":"d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247"} Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257944 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" containerID="cri-o://d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257947 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" containerID="cri-o://c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.277327 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.277308258 podStartE2EDuration="5.277308258s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:33.27492735 +0000 UTC m=+1158.652195262" watchObservedRunningTime="2026-02-02 07:05:33.277308258 +0000 UTC m=+1158.654576170" Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.359131 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.359113114 podStartE2EDuration="5.359113114s" 
podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:33.350996314 +0000 UTC m=+1158.728264226" watchObservedRunningTime="2026-02-02 07:05:33.359113114 +0000 UTC m=+1158.736381026" Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.273172 4842 generic.go:334] "Generic (PLEG): container finished" podID="0083ea44-21b0-492b-971b-671241ff8abc" containerID="2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda" exitCode=0 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.274523 4842 generic.go:334] "Generic (PLEG): container finished" podID="0083ea44-21b0-492b-971b-671241ff8abc" containerID="ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9" exitCode=143 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.273264 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerDied","Data":"2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.274684 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerDied","Data":"ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278540 4842 generic.go:334] "Generic (PLEG): container finished" podID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerID="c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d" exitCode=143 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278565 4842 generic.go:334] "Generic (PLEG): container finished" podID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerID="d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247" exitCode=143 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278572 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerDied","Data":"c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278623 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerDied","Data":"d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.280556 4842 generic.go:334] "Generic (PLEG): container finished" podID="34848244-9de8-4950-8a9a-7e571c3104c9" containerID="7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42" exitCode=0 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.280621 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerDied","Data":"7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42"} Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.270817 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.322358 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.344161 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerDied","Data":"fd34e55492114d1dc15256d5270c613a7bb387100ffe277e3f9d66d6fd42c42e"} Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.344204 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd34e55492114d1dc15256d5270c613a7bb387100ffe277e3f9d66d6fd42c42e" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.344286 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.374992 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375313 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375547 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375707 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375845 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.385810 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.395263 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6" (OuterVolumeSpecName: "kube-api-access-29cd6") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "kube-api-access-29cd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.399524 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.417653 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts" (OuterVolumeSpecName: "scripts") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.470459 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486171 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486202 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486214 4842 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486240 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486250 4842 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.497471 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data" (OuterVolumeSpecName: "config-data") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.535684 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.535958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" containerID="cri-o://b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702" gracePeriod=10 Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.587860 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.357110 4842 generic.go:334] "Generic (PLEG): container finished" podID="e793f6a1-ed49-496a-af57-84d696daf728" containerID="b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702" exitCode=0 Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.357163 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerDied","Data":"b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702"} Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.367760 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.461486 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.492588 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.498989 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:05:40 crc kubenswrapper[4842]: E0202 07:05:40.499346 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" containerName="keystone-bootstrap" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499361 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" containerName="keystone-bootstrap" Feb 02 07:05:40 crc kubenswrapper[4842]: E0202 07:05:40.499378 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" containerName="init" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499385 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" containerName="init" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499560 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" containerName="init" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499578 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" containerName="keystone-bootstrap" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.500136 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.506951 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507406 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507572 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507592 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507720 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.519788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614127 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614175 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614193 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614258 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614279 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614298 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715610 4842 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715654 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715676 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715731 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715749 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715769 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.719927 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.720030 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.720433 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.729877 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"keystone-bootstrap-xh7mg\" (UID: 
\"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.730040 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.732292 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.835334 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:41 crc kubenswrapper[4842]: I0202 07:05:41.450064 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" path="/var/lib/kubelet/pods/34848244-9de8-4950-8a9a-7e571c3104c9/volumes" Feb 02 07:05:45 crc kubenswrapper[4842]: I0202 07:05:45.367832 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 02 07:05:47 crc kubenswrapper[4842]: I0202 07:05:47.419237 4842 generic.go:334] "Generic (PLEG): container finished" podID="c49955b5-5145-4939-91e5-280569e18a33" containerID="e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e" exitCode=0 Feb 02 07:05:47 crc kubenswrapper[4842]: I0202 07:05:47.419267 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerDied","Data":"e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e"} Feb 02 07:05:50 crc kubenswrapper[4842]: I0202 07:05:50.368431 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 02 07:05:50 crc kubenswrapper[4842]: I0202 07:05:50.368962 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.522876 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.535099 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.557313 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647008 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647053 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"c49955b5-5145-4939-91e5-280569e18a33\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647110 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647152 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647200 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647240 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647267 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647348 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 
02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647388 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647409 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647425 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647443 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647462 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647485 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647503 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647523 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"c49955b5-5145-4939-91e5-280569e18a33\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647572 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod \"c49955b5-5145-4939-91e5-280569e18a33\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647802 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.648191 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.654795 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts" (OuterVolumeSpecName: "scripts") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.656846 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts" (OuterVolumeSpecName: "scripts") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.657074 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs" (OuterVolumeSpecName: "logs") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.660234 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg" (OuterVolumeSpecName: "kube-api-access-dn2sg") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "kube-api-access-dn2sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.660695 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs" (OuterVolumeSpecName: "logs") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.663166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt" (OuterVolumeSpecName: "kube-api-access-86gvt") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "kube-api-access-86gvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.666169 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.667922 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng" (OuterVolumeSpecName: "kube-api-access-4xhng") pod "c49955b5-5145-4939-91e5-280569e18a33" (UID: "c49955b5-5145-4939-91e5-280569e18a33"). InnerVolumeSpecName "kube-api-access-4xhng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.673759 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.705071 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.711828 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c49955b5-5145-4939-91e5-280569e18a33" (UID: "c49955b5-5145-4939-91e5-280569e18a33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.733688 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config" (OuterVolumeSpecName: "config") pod "c49955b5-5145-4939-91e5-280569e18a33" (UID: "c49955b5-5145-4939-91e5-280569e18a33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750127 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750236 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750972 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750990 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751000 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751020 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751029 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751038 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751045 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751059 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751068 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751077 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751085 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751093 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") on node \"crc\" DevicePath \"\"" Feb 02 
07:05:51 crc kubenswrapper[4842]: W0202 07:05:51.751158 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89/volumes/kubernetes.io~secret/combined-ca-bundle Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751167 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.753820 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.760397 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.777656 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.779207 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data" (OuterVolumeSpecName: "config-data") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.784207 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.786771 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data" (OuterVolumeSpecName: "config-data") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.808169 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852396 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852427 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852439 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852449 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852457 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852465 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852473 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852480 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.951811 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerDied","Data":"8c23fbb0fff0a16501dd8fc713b53a51e1c6260cd6f5e5446454a32930538b9a"} Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.951859 4842 scope.go:117] "RemoveContainer" containerID="2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.951959 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.964362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerDied","Data":"08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec"} Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.964394 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.964453 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.967664 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerDied","Data":"75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66"} Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.967729 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.983124 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.011522 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048322 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048739 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048757 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048779 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048785 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048799 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048805 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048820 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048826 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048832 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49955b5-5145-4939-91e5-280569e18a33" containerName="neutron-db-sync" Feb 02 07:05:52 crc 
kubenswrapper[4842]: I0202 07:05:52.048840 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49955b5-5145-4939-91e5-280569e18a33" containerName="neutron-db-sync" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048989 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049005 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049016 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49955b5-5145-4939-91e5-280569e18a33" containerName="neutron-db-sync" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049027 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049036 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.050008 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.054084 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055531 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055645 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fpq5h" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055760 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055952 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.060299 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.066753 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.072724 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.074205 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.076238 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.077452 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.078408 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164143 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164242 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164280 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164304 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164485 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164532 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164559 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164616 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164636 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164717 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164769 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164853 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164931 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.165000 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.165305 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 
07:05:52.267730 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267790 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267809 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267838 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267858 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267980 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268011 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268080 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268130 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268158 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268172 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268203 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268248 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268583 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.269024 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.269098 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.269816 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.270292 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.277590 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.277899 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.278797 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.280101 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.295305 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.296071 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " 
pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.296242 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.298472 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.300725 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.305358 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.334259 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.344158 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.369679 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.387578 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.814443 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.816351 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.833472 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881346 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881473 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881523 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881586 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.882012 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.882359 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.950879 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.952557 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958130 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958451 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qlr5t" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958682 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958773 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.963862 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983618 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983686 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983748 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983802 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983829 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983851 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984635 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " 
pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984736 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984855 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984893 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984642 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.002157 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085285 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085355 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085385 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085632 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 
07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.139576 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187555 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187604 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187675 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187732 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.192717 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.194321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.198456 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " 
pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.201083 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.203338 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.275671 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: E0202 07:05:53.284108 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Feb 02 07:05:53 crc kubenswrapper[4842]: E0202 07:05:53.284339 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4nz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-phj68_openstack(d9f1c72e-953b-45ba-ba69-c7574f82e8ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 02 07:05:53 crc kubenswrapper[4842]: E0202 07:05:53.285554 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-phj68" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.297486 4842 scope.go:117] "RemoveContainer" containerID="ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.357303 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.400927 4842 scope.go:117] "RemoveContainer" containerID="c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.450650 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0083ea44-21b0-492b-971b-671241ff8abc" path="/var/lib/kubelet/pods/0083ea44-21b0-492b-971b-671241ff8abc/volumes" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.452590 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" path="/var/lib/kubelet/pods/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89/volumes" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.469566 4842 scope.go:117] "RemoveContainer" containerID="d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493475 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493540 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493717 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493790 4842 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493811 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.505166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2" (OuterVolumeSpecName: "kube-api-access-2dmt2") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "kube-api-access-2dmt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.553344 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.567586 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.594668 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: W0202 07:05:53.595308 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e793f6a1-ed49-496a-af57-84d696daf728/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595343 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595788 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595804 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595817 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595826 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.596384 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.606447 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config" (OuterVolumeSpecName: "config") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.697345 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.697372 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.710802 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.860738 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:05:53 crc kubenswrapper[4842]: W0202 07:05:53.959868 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09febcea_8bf3_43b8_b6ff_ae8a0e445519.slice/crio-a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca WatchSource:0}: Error finding container a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca: Status 404 returned error can't find the container with id a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.960304 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.001330 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerStarted","Data":"a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.004365 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerStarted","Data":"c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.007363 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.020992 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-sjstk" podStartSLOduration=2.606365615 podStartE2EDuration="26.02097611s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.83243625 +0000 UTC m=+1155.209704162" lastFinishedPulling="2026-02-02 07:05:53.247046745 +0000 UTC m=+1178.624314657" observedRunningTime="2026-02-02 07:05:54.018242283 +0000 UTC m=+1179.395510205" watchObservedRunningTime="2026-02-02 07:05:54.02097611 +0000 UTC m=+1179.398244022" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.022803 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerStarted","Data":"1c28118337b87470e336f30ccbda4bc135a7ba7f7cf6293ce8d7b2e21bac07df"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.032153 4842 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerStarted","Data":"cce78954b1aa2e246ca2d16f8b3a27b68612df254d83dcbe0635ca9b3466aaa0"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.035656 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerStarted","Data":"5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.072122 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.073286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerDied","Data":"b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.073344 4842 scope.go:117] "RemoveContainer" containerID="b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.077649 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2ddsf" podStartSLOduration=2.781064787 podStartE2EDuration="26.077628395s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.999822383 +0000 UTC m=+1155.377090295" lastFinishedPulling="2026-02-02 07:05:53.296385991 +0000 UTC m=+1178.673653903" observedRunningTime="2026-02-02 07:05:54.052330212 +0000 UTC m=+1179.429598124" watchObservedRunningTime="2026-02-02 07:05:54.077628395 +0000 UTC m=+1179.454896307" Feb 02 07:05:54 crc kubenswrapper[4842]: E0202 07:05:54.082190 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-phj68" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.092441 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:54 crc kubenswrapper[4842]: W0202 07:05:54.181819 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74fb1197_2202_4b15_a858_05dd736a1a26.slice/crio-c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d WatchSource:0}: Error finding container c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d: Status 404 returned error can't find the container with id c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.195298 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.209487 4842 scope.go:117] "RemoveContainer" containerID="dca3dac891364e01eb6e12794cb5bb79081189c188f045ba72387b730d26feaa" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.231008 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 
07:05:54.252787 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.080117 4842 generic.go:334] "Generic (PLEG): container finished" podID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerID="82eafdb535c05f6b04556ae1baee492e7492a5e0fe1080d56e7f4182f6ac68b9" exitCode=0 Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.080644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerDied","Data":"82eafdb535c05f6b04556ae1baee492e7492a5e0fe1080d56e7f4182f6ac68b9"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.090553 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerStarted","Data":"39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.113928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerStarted","Data":"17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.113970 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerStarted","Data":"c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.126724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerStarted","Data":"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130254 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerStarted","Data":"f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerStarted","Data":"6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130297 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130308 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerStarted","Data":"c685a8dc8410d6a7a79b5205dd3ff23339631326844f2a5b84578d841706238e"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.139461 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xh7mg" podStartSLOduration=15.139441171 podStartE2EDuration="15.139441171s" podCreationTimestamp="2026-02-02 07:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:55.122458983 +0000 UTC m=+1180.499726895" 
watchObservedRunningTime="2026-02-02 07:05:55.139441171 +0000 UTC m=+1180.516709083" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.154597 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b469b995b-npwfd" podStartSLOduration=3.154579634 podStartE2EDuration="3.154579634s" podCreationTimestamp="2026-02-02 07:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:55.14871888 +0000 UTC m=+1180.525986792" watchObservedRunningTime="2026-02-02 07:05:55.154579634 +0000 UTC m=+1180.531847546" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.270647 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:05:55 crc kubenswrapper[4842]: E0202 07:05:55.270989 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.271001 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" Feb 02 07:05:55 crc kubenswrapper[4842]: E0202 07:05:55.271012 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="init" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.271017 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="init" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.271177 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.279614 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.282730 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.287960 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.290701 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334416 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334678 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334761 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334785 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334891 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436276 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436344 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436499 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436528 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436551 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436582 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.444586 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.444862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.445432 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " 
pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.447955 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.450197 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.450416 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.454802 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.457777 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e793f6a1-ed49-496a-af57-84d696daf728" path="/var/lib/kubelet/pods/e793f6a1-ed49-496a-af57-84d696daf728/volumes" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.602263 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:56 crc kubenswrapper[4842]: I0202 07:05:56.175838 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerStarted","Data":"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8"} Feb 02 07:05:56 crc kubenswrapper[4842]: I0202 07:05:56.209570 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.209552501 podStartE2EDuration="5.209552501s" podCreationTimestamp="2026-02-02 07:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:56.203870811 +0000 UTC m=+1181.581138723" watchObservedRunningTime="2026-02-02 07:05:56.209552501 +0000 UTC m=+1181.586820413" Feb 02 07:05:56 crc kubenswrapper[4842]: I0202 07:05:56.554543 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.191354 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerStarted","Data":"224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.201085 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.214840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerStarted","Data":"05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.215831 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.217858 4842 generic.go:334] "Generic (PLEG): container finished" podID="fff8a308-89ab-409f-9053-6a363794df83" containerID="5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb" exitCode=0 Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.217916 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerDied","Data":"5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221540 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerStarted","Data":"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerStarted","Data":"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerStarted","Data":"6baf18e2465586bae82b31b897e8d4dfb75242a3b157fb93fe3a29ff487cbf1b"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221684 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.228081 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.22805984 podStartE2EDuration="5.22805984s" podCreationTimestamp="2026-02-02 07:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:57.209533334 +0000 UTC m=+1182.586801266" watchObservedRunningTime="2026-02-02 07:05:57.22805984 +0000 UTC m=+1182.605327752" Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.244136 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" podStartSLOduration=5.244115296 podStartE2EDuration="5.244115296s" podCreationTimestamp="2026-02-02 07:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:57.230527701 +0000 UTC m=+1182.607795613" watchObservedRunningTime="2026-02-02 07:05:57.244115296 +0000 UTC m=+1182.621383208" Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.234270 4842 generic.go:334] "Generic (PLEG): container finished" podID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerID="39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18" exitCode=0 Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.234351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerDied","Data":"39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18"} Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.237233 4842 generic.go:334] "Generic (PLEG): container finished" podID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerID="c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8" exitCode=0 Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.237239 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerDied","Data":"c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8"} Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.252718 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fcc587c45-x7h24" podStartSLOduration=3.252700491 podStartE2EDuration="3.252700491s" podCreationTimestamp="2026-02-02 07:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:57.276992136 +0000 UTC m=+1182.654260118" watchObservedRunningTime="2026-02-02 07:05:58.252700491 +0000 UTC m=+1183.629968393" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.169034 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.174441 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.209669 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.253804 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.253907 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.253935 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254436 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254618 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254674 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254766 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254827 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254916 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254990 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255038 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255069 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255108 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255145 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.261286 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs" (OuterVolumeSpecName: "logs") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.266116 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerDied","Data":"7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33"} Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.266175 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.266278 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.267459 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8" (OuterVolumeSpecName: "kube-api-access-nz6l8") pod "80249ec8-3d5a-4020-bed2-83b8ecd32ab9" (UID: "80249ec8-3d5a-4020-bed2-83b8ecd32ab9"). InnerVolumeSpecName "kube-api-access-nz6l8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.268306 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerDied","Data":"1c28118337b87470e336f30ccbda4bc135a7ba7f7cf6293ce8d7b2e21bac07df"} Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.268347 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c28118337b87470e336f30ccbda4bc135a7ba7f7cf6293ce8d7b2e21bac07df" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.268413 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272537 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerDied","Data":"cd2d0997e2cc127c80bb06f907a598f4209b55d656a3634a4391e4cc9d674026"} Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272581 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2d0997e2cc127c80bb06f907a598f4209b55d656a3634a4391e4cc9d674026" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272650 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272648 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.284392 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts" (OuterVolumeSpecName: "scripts") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.284415 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts" (OuterVolumeSpecName: "scripts") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.284763 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4" (OuterVolumeSpecName: "kube-api-access-h92g4") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "kube-api-access-h92g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.288499 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm" (OuterVolumeSpecName: "kube-api-access-672bm") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "kube-api-access-672bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.288616 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.288647 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "80249ec8-3d5a-4020-bed2-83b8ecd32ab9" (UID: "80249ec8-3d5a-4020-bed2-83b8ecd32ab9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.293081 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.301410 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data" (OuterVolumeSpecName: "config-data") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.303642 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data" (OuterVolumeSpecName: "config-data") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.328509 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.332911 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80249ec8-3d5a-4020-bed2-83b8ecd32ab9" (UID: "80249ec8-3d5a-4020-bed2-83b8ecd32ab9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356914 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356937 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356947 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356955 4842 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356964 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356972 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356980 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356987 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356995 4842 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357002 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357010 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357017 4842 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357025 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc 
kubenswrapper[4842]: I0202 07:06:01.357034 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.313724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921"} Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.370854 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.370898 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.389424 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.389660 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.397614 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:06:02 crc kubenswrapper[4842]: E0202 07:06:02.398019 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerName="keystone-bootstrap" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398039 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerName="keystone-bootstrap" Feb 02 07:06:02 crc kubenswrapper[4842]: E0202 07:06:02.398060 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerName="barbican-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398069 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerName="barbican-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: E0202 07:06:02.398106 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff8a308-89ab-409f-9053-6a363794df83" containerName="placement-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398115 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff8a308-89ab-409f-9053-6a363794df83" containerName="placement-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398351 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerName="keystone-bootstrap" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398373 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff8a308-89ab-409f-9053-6a363794df83" containerName="placement-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398409 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerName="barbican-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.399016 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.403936 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404366 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404580 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404724 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404848 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404958 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.425701 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.440476 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.441825 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.444795 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.450693 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.450991 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.451292 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.451430 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rf5dt" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.451600 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.457794 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.481929 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.481981 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " 
pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482015 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482074 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482096 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482117 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482138 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482164 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.506788 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.521490 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.528414 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.560916 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.569339 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.580721 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-drtzj" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.580964 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.581119 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593083 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593166 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593202 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593253 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593285 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593311 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593403 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593427 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593512 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593551 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593619 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593708 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593729 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593787 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593838 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.600148 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.601159 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.601886 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.612401 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.613120 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.624014 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.626306 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.629321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.629390 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.630289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.633017 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.676354 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.685093 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705436 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705505 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705526 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705542 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705568 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705593 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705693 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705717 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705746 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705771 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705798 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705832 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705849 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705873 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705922 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.707999 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 
07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.711564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.715747 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.715787 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.728606 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.747602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.749565 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.754561 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.786080 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808696 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808734 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808761 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808798 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808877 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808895 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod 
\"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.812864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.817507 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.818408 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.820538 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.826023 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.829909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.829973 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.830184 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" containerID="cri-o://05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550" gracePeriod=10 Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.834011 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.835178 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.836554 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.847751 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.850669 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.851741 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.852167 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.870483 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.925665 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.927120 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.934346 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.936201 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.948958 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.971537 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.985619 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.001827 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.003428 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.010126 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.012281 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020556 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020873 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020982 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021098 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021212 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021355 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021506 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021634 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022526 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022841 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022928 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " 
pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.023012 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.023170 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.032705 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.032823 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.038988 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.123646 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124629 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124682 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124699 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124718 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " 
pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124735 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124751 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124769 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124786 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124808 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124888 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124909 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"barbican-worker-57cc9f4749-jxzrq\" 
(UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124924 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124939 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124956 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124973 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124991 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125011 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125034 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125051 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125068 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125094 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125115 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125140 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125157 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125173 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.128867 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.129460 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.129542 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130088 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130098 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130803 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.135557 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.138021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.140181 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.142439 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.144714 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.150953 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod 
\"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.151472 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.153021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.161764 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.176972 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.193137 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.201836 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226560 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226595 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226632 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226669 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226699 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226716 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226735 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226753 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226778 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226815 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226844 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231638 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231923 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231937 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.235429 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.235862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " 
pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.236019 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.237423 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.238075 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.240974 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.248236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.251666 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.332510 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.336422 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerStarted","Data":"0a8707912ffa5b95a33e852a86d3ad76fb5ed5f7a33153be252e8d6c15cbbb8d"} Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.346808 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.350687 4842 generic.go:334] "Generic (PLEG): container finished" podID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerID="05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550" exitCode=0 Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351662 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerDied","Data":"05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550"} Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351758 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351775 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351925 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.367135 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.381408 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.417874 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.536996 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537321 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537417 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537448 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537491 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537520 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.552394 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq" (OuterVolumeSpecName: "kube-api-access-f5wjq") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "kube-api-access-f5wjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.610270 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.643648 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.644260 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:03 crc kubenswrapper[4842]: W0202 07:06:03.686443 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948096a2_7fcf_4cb1_90da_90f3edbfd95b.slice/crio-c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741 WatchSource:0}: Error finding container c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741: Status 404 returned error can't find the container with id c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741 Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.693959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config" (OuterVolumeSpecName: "config") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.726945 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.728943 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.749731 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.750034 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.750048 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.751644 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.759715 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.852199 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.852248 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.890063 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.913317 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.935125 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:06:03 crc kubenswrapper[4842]: W0202 07:06:03.936263 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595bc2a4_f0b8_4930_8c66_3b3da4cc4630.slice/crio-f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d WatchSource:0}: Error finding container f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d: Status 404 returned error can't find the container with id f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.105286 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.191240 4842 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.225286 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:04 crc kubenswrapper[4842]: W0202 07:06:04.250292 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac50621f_67cd_441d_99ea_6839f7f3b556.slice/crio-5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873 WatchSource:0}: Error finding container 5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873: Status 404 returned error can't find the container with id 5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873 Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.371381 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerStarted","Data":"3841fc7dcb9ce569457a802c09c27ff59529bd2560831414d8333da874fb2c77"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.375084 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerStarted","Data":"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.376145 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.380004 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerStarted","Data":"5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.395837 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerDied","Data":"cce78954b1aa2e246ca2d16f8b3a27b68612df254d83dcbe0635ca9b3466aaa0"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.395886 4842 scope.go:117] "RemoveContainer" containerID="05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.396019 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.399056 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cd7d86b6c-rcdjq" podStartSLOduration=2.399038335 podStartE2EDuration="2.399038335s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:04.39560866 +0000 UTC m=+1189.772876632" watchObservedRunningTime="2026-02-02 07:06:04.399038335 +0000 UTC m=+1189.776306247" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.402083 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerStarted","Data":"1a2fdbaaf7cba0dd3058c59daa47fefc2d3624684698fe684e8a50e2db075890"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.416238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerStarted","Data":"33a7212242745098719539d77d7d2ab10cc0d6841f34ba8ac2dabc8a942c26b5"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.437764 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerStarted","Data":"c4839ac05fedf9ceb883263b26b3f9a42e354a5742d5701bc345aed976299c03"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.440460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerStarted","Data":"eb1c879ce0521868ffea7d5ca4ba1e741e4b7c55bb4a6410da53f5413323bc74"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.443964 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerStarted","Data":"f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.466363 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerStarted","Data":"c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.499300 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.506578 4842 scope.go:117] "RemoveContainer" containerID="82eafdb535c05f6b04556ae1baee492e7492a5e0fe1080d56e7f4182f6ac68b9" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.510085 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:06:04 crc kubenswrapper[4842]: E0202 07:06:04.748820 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595bc2a4_f0b8_4930_8c66_3b3da4cc4630.slice/crio-b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c0bd1b2_3ffe_443f_b632_b44ed96afc30.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.452191 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" path="/var/lib/kubelet/pods/8c0bd1b2-3ffe-443f-b632-b44ed96afc30/volumes" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.499782 4842 generic.go:334] "Generic (PLEG): container finished" podID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerID="b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2" exitCode=0 Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.499833 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerDied","Data":"b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.523548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerStarted","Data":"c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.523589 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerStarted","Data":"6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.524391 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.524417 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.543364 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerStarted","Data":"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.543403 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerStarted","Data":"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.544089 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.544117 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549240 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerStarted","Data":"589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549266 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" 
event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerStarted","Data":"2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549734 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549759 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.552560 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.552577 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.632702 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-697d496d6b-bz7zg" podStartSLOduration=3.632685414 podStartE2EDuration="3.632685414s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:05.632103059 +0000 UTC m=+1191.009370961" watchObservedRunningTime="2026-02-02 07:06:05.632685414 +0000 UTC m=+1191.009953326" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.662403 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b5c67fdbd-zsx96" podStartSLOduration=3.662386345 podStartE2EDuration="3.662386345s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:05.65403657 +0000 UTC m=+1191.031304492" watchObservedRunningTime="2026-02-02 07:06:05.662386345 +0000 UTC m=+1191.039654257" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.702675 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-578f976b4-mj2qx" podStartSLOduration=3.702656567 podStartE2EDuration="3.702656567s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:05.682494051 +0000 UTC m=+1191.059761963" watchObservedRunningTime="2026-02-02 07:06:05.702656567 +0000 UTC m=+1191.079924469" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.054881 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.056379 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.086772 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.086898 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.089498 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.286606 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:06:06 crc 
kubenswrapper[4842]: E0202 07:06:06.286986 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="init" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.287003 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="init" Feb 02 07:06:06 crc kubenswrapper[4842]: E0202 07:06:06.287030 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.287036 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.287200 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.288081 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.293023 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.294174 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.315017 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.422901 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423183 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423231 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423284 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423317 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423345 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423368 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525190 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525240 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525297 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525327 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525358 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525384 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.528483 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.534826 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.535564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.536815 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.537350 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.541283 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.556864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.619824 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:07 crc kubenswrapper[4842]: I0202 07:06:07.876315 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.610466 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerStarted","Data":"e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.610817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerStarted","Data":"a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.624012 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerStarted","Data":"548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.624055 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerStarted","Data":"70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.640411 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" podStartSLOduration=3.036503608 podStartE2EDuration="6.640392293s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:03.711445167 +0000 UTC m=+1189.088713079" lastFinishedPulling="2026-02-02 07:06:07.315333852 +0000 UTC m=+1192.692601764" observedRunningTime="2026-02-02 07:06:08.629759191 +0000 UTC m=+1194.007027103" watchObservedRunningTime="2026-02-02 07:06:08.640392293 +0000 UTC m=+1194.017660205" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.649250 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerStarted","Data":"36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.649306 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerStarted","Data":"2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.661179 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-cdc46cdfc-px7hq" podStartSLOduration=3.232897435 podStartE2EDuration="6.661158344s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:03.915085593 +0000 UTC m=+1189.292353515" lastFinishedPulling="2026-02-02 07:06:07.343346512 +0000 UTC m=+1192.720614424" observedRunningTime="2026-02-02 07:06:08.649341323 +0000 UTC m=+1194.026609245" watchObservedRunningTime="2026-02-02 07:06:08.661158344 +0000 UTC m=+1194.038426246" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.667411 4842 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerStarted","Data":"aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.667451 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerStarted","Data":"5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.692197 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerStarted","Data":"83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.692255 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerStarted","Data":"d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.692275 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerStarted","Data":"fd6b7a98a2a46a28710ac379918018f758437a367de16692a4e1403ffd79ebbd"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.693363 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.693398 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.707980 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-57cc9f4749-jxzrq" podStartSLOduration=3.607139894 podStartE2EDuration="6.707957267s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:04.249749927 +0000 UTC m=+1189.627017839" lastFinishedPulling="2026-02-02 07:06:07.3505673 +0000 UTC m=+1192.727835212" observedRunningTime="2026-02-02 07:06:08.679744652 +0000 UTC m=+1194.057012584" watchObservedRunningTime="2026-02-02 07:06:08.707957267 +0000 UTC m=+1194.085225179" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.734345 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerStarted","Data":"053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.734745 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.747549 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.747700 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" podStartSLOduration=3.378048641 podStartE2EDuration="6.747689916s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:03.945658006 +0000 UTC m=+1189.322925918" 
lastFinishedPulling="2026-02-02 07:06:07.315299291 +0000 UTC m=+1192.692567193" observedRunningTime="2026-02-02 07:06:08.716898458 +0000 UTC m=+1194.094166370" watchObservedRunningTime="2026-02-02 07:06:08.747689916 +0000 UTC m=+1194.124957828" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.786842 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.799492 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podStartSLOduration=2.799474052 podStartE2EDuration="2.799474052s" podCreationTimestamp="2026-02-02 07:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:08.748976688 +0000 UTC m=+1194.126244600" watchObservedRunningTime="2026-02-02 07:06:08.799474052 +0000 UTC m=+1194.176741964" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.808827 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" podStartSLOduration=6.808815492 podStartE2EDuration="6.808815492s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:08.774166498 +0000 UTC m=+1194.151434410" watchObservedRunningTime="2026-02-02 07:06:08.808815492 +0000 UTC m=+1194.186083394" Feb 02 07:06:09 crc kubenswrapper[4842]: I0202 07:06:09.747382 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerStarted","Data":"d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081"} Feb 02 07:06:09 crc kubenswrapper[4842]: I0202 07:06:09.768272 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-phj68" podStartSLOduration=3.395597675 podStartE2EDuration="41.768251395s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.664886083 +0000 UTC m=+1155.042153995" lastFinishedPulling="2026-02-02 07:06:08.037539803 +0000 UTC m=+1193.414807715" observedRunningTime="2026-02-02 07:06:09.767832955 +0000 UTC m=+1195.145100867" watchObservedRunningTime="2026-02-02 07:06:09.768251395 +0000 UTC m=+1195.145519307" Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.755830 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-cdc46cdfc-px7hq" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker" containerID="cri-o://548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319" gracePeriod=30 Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.755885 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-cdc46cdfc-px7hq" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker-log" containerID="cri-o://70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1" gracePeriod=30 Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.756191 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener" 
containerID="cri-o://e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6" gracePeriod=30 Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.756129 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log" containerID="cri-o://a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d" gracePeriod=30 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.773915 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerID="548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319" exitCode=0 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.774895 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerID="70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1" exitCode=143 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.774904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerDied","Data":"548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319"} Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.775321 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerDied","Data":"70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1"} Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.777535 4842 generic.go:334] "Generic (PLEG): container finished" podID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerID="e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6" exitCode=0 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.777790 4842 generic.go:334] "Generic (PLEG): container finished" podID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerID="a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d" exitCode=143 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.777748 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerDied","Data":"e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6"} Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.778076 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerDied","Data":"a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d"} Feb 02 07:06:12 crc kubenswrapper[4842]: I0202 07:06:12.793629 4842 generic.go:334] "Generic (PLEG): container finished" podID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerID="d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081" exitCode=0 Feb 02 07:06:12 crc kubenswrapper[4842]: I0202 07:06:12.793730 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerDied","Data":"d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081"} Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.195427 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 
02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.326899 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.327103 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns" containerID="cri-o://070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17" gracePeriod=10 Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.803722 4842 generic.go:334] "Generic (PLEG): container finished" podID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerID="070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17" exitCode=0 Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.803817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerDied","Data":"070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17"} Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.321006 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.395240 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414364 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414550 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414616 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414658 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414705 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.420037 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.423901 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s" (OuterVolumeSpecName: "kube-api-access-b767s") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "kube-api-access-b767s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.428765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs" (OuterVolumeSpecName: "logs") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.467516 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.472427 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.505541 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data" (OuterVolumeSpecName: "config-data") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519181 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519264 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519281 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519293 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519305 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.520536 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.590879 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623572 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623610 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623627 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623650 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: 
I0202 07:06:14.623676 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623745 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623763 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623810 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623833 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623879 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.627891 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.628325 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2" (OuterVolumeSpecName: "kube-api-access-v4nz2") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "kube-api-access-v4nz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.629050 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs" (OuterVolumeSpecName: "logs") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.629607 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.631548 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts" (OuterVolumeSpecName: "scripts") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.634109 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7" (OuterVolumeSpecName: "kube-api-access-l6zc7") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "kube-api-access-l6zc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.636124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.658004 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.690358 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.696859 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data" (OuterVolumeSpecName: "config-data") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.698881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data" (OuterVolumeSpecName: "config-data") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726298 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726401 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726463 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726491 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726520 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726597 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726940 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726955 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726966 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726975 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726984 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726993 4842 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727000 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727008 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727016 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727024 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727031 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.729309 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw" (OuterVolumeSpecName: "kube-api-access-trrjw") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "kube-api-access-trrjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.762648 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.766814 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.773262 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.779854 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.789595 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config" (OuterVolumeSpecName: "config") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.815710 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.815704 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerDied","Data":"c4839ac05fedf9ceb883263b26b3f9a42e354a5742d5701bc345aed976299c03"}
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.816002 4842 scope.go:117] "RemoveContainer" containerID="548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.822822 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0"}
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.822926 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" containerID="cri-o://2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" gracePeriod=30
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.822965 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.823046 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" containerID="cri-o://fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" gracePeriod=30
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.823087 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" containerID="cri-o://46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" gracePeriod=30
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.823117 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" containerID="cri-o://489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" gracePeriod=30
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.830966 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831013 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831033 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831053 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831070 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831086 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831800 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831799 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerDied","Data":"c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741"}
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.834259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerDied","Data":"e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704"}
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.834287 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.835749 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.840493 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-578f976b4-mj2qx"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.844110 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerDied","Data":"3bf1c02d1eb4a6fd6bfb8e0d7089ca1be72bb9eccd12b09bde66e78b797862a2"}
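The "SyncLoop (PLEG): event for pod" lines above come from the pod lifecycle event generator, which periodically relists containers from the runtime and turns observed state transitions into events for the sync loop. A rough, self-contained Go approximation under that assumption (the names are illustrative, not the kubelet's real types in pkg/kubelet/pleg):

package main

import "fmt"

// plegEvent is a hypothetical, trimmed-down lifecycle event.
type plegEvent struct{ Type, ContainerID string }

// relistDiff compares the container states seen at the previous relist
// with the current ones and emits one event per transition. Containers
// that vanished entirely are omitted here for brevity.
func relistDiff(old, cur map[string]string) []plegEvent {
	var events []plegEvent
	for id, state := range cur {
		if old[id] == state {
			continue // no transition, no event
		}
		switch state {
		case "running":
			events = append(events, plegEvent{"ContainerStarted", id})
		case "exited":
			events = append(events, plegEvent{"ContainerDied", id})
		}
	}
	return events
}

func main() {
	// Sample transition modeled on the cinder-db-sync sandbox above.
	old := map[string]string{"e0942641dc83": "running"}
	cur := map[string]string{"e0942641dc83": "exited"}
	for _, e := range relistDiff(old, cur) {
		fmt.Printf("SyncLoop (PLEG): event for pod: %s %s\n", e.Type, e.ContainerID)
	}
}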
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.844159 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"
Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.852341 4842 scope.go:117] "RemoveContainer" containerID="70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.899738 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.396628368 podStartE2EDuration="46.89972034s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.8194449 +0000 UTC m=+1155.196712812" lastFinishedPulling="2026-02-02 07:06:14.322536852 +0000 UTC m=+1199.699804784" observedRunningTime="2026-02-02 07:06:14.853388199 +0000 UTC m=+1200.230656111" watchObservedRunningTime="2026-02-02 07:06:14.89972034 +0000 UTC m=+1200.276988252"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.926599 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.946751 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.972001 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-578f976b4-mj2qx"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.097566 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"]
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098015 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098032 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener"
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098054 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098060 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log"
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098069 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098074 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns"
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098086 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098092 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker"
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098104 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker-log"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098110 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker-log"
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098117 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="init"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098122 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="init"
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098131 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerName="cinder-db-sync"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098137 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerName="cinder-db-sync"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098309 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098323 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098340 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098353 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098362 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerName="cinder-db-sync"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098368 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker-log"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.099197 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.108382 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.110087 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.113508 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.113745 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.113868 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.114004 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fr64b"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.120542 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.144375 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154352 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154390 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154424 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154460 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154481 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154554 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154595 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154611 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154631 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154648 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154675 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154803 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.235287 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.250045 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.252448 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255538 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255573 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255604 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255623 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255669 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255683 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255700 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255730 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255750 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255773 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255792 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255807 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255833 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255850 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255871 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255921 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255941 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.256544 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.260021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.260873 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.261030 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.261758 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.262890 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.264801 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.267971 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.272877 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.274874 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.289176 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.301339 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.304977 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356643 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356710 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356747 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356778 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356825 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356846 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356864 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.357157 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.357356 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.359043 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.361186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.362523 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.363748 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.377791 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.378073 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.448699 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" path="/var/lib/kubelet/pods/0d385ecd-3bd8-41cf-814b-6409c426dc80/volumes"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.503921 4842 scope.go:117] "RemoveContainer" containerID="e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6"
Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.516197 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d385ecd_3bd8_41cf_814b_6409c426dc80.slice/crio-c4839ac05fedf9ceb883263b26b3f9a42e354a5742d5701bc345aed976299c03\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d385ecd_3bd8_41cf_814b_6409c426dc80.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-conmon-fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-conmon-2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948096a2_7fcf_4cb1_90da_90f3edbfd95b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.548377 4842 scope.go:117] "RemoveContainer" containerID="a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.549698 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.550487 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fr64b"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.560675 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.562310 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.562476 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.590545 4842 scope.go:117] "RemoveContainer" containerID="070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.598895 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.608342 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.622271 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"]
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.660694 4842 scope.go:117] "RemoveContainer" containerID="b65de85796493b7fd1d1b4d84ddbf8a0d1cb6cbceca0fba243ff835d64eb5002"
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858633 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" exitCode=0
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858839 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0"}
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858891 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921"}
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858859 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" exitCode=2
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858915 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" exitCode=0
Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858960 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132"}
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.092345 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"]
Feb 02 07:06:16 crc kubenswrapper[4842]: W0202 07:06:16.096575 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e3c4cab_c86f_4819_8d09_ac45ccb6ea16.slice/crio-1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c WatchSource:0}: Error finding container 1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c: Status 404 returned error can't find the container with id 1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c
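The ceilometer "container finished" events above close out the gracePeriod=30 kills issued earlier: proxy-httpd and ceilometer-central-agent exited 0 after a clean SIGTERM shutdown, while sg-core died with exitCode=2. Reduced to a plain process (the real path travels through the CRI to CRI-O), the escalation logic looks roughly like the following sketch:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGracePeriod sends SIGTERM, waits up to the grace period for the
// process to exit on its own, and escalates to SIGKILL only if it is
// still running afterwards. A process that handles SIGTERM and exits
// cleanly reports code 0; one killed by an unhandled signal surfaces a
// non-zero code, as sg-core's exitCode=2 does above.
func killWithGracePeriod(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // polite shutdown request
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: force SIGKILL
		fmt.Println("force-killed after grace period:", <-done)
	}
}

func main() {
	cmd := exec.Command("sleep", "300") // stand-in workload
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGracePeriod(cmd, 3*time.Second) // the pods in the log use 30s
}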
Feb 02 07:06:16 crc kubenswrapper[4842]: W0202 07:06:16.143237 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccb5d691_9421_4007_8184_b3885f746622.slice/crio-0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac WatchSource:0}: Error finding container 0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac: Status 404 returned error can't find the container with id 0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.156998 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.207127 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 07:06:16 crc kubenswrapper[4842]: W0202 07:06:16.214351 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd737380b_08d3_455f_a9a7_080d76cabc9f.slice/crio-448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db WatchSource:0}: Error finding container 448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db: Status 404 returned error can't find the container with id 448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.883926 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerStarted","Data":"448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db"}
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.893606 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerStarted","Data":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"}
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.893650 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerStarted","Data":"0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac"}
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.898557 4842 generic.go:334] "Generic (PLEG): container finished" podID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775" exitCode=0
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.898622 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerDied","Data":"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"}
Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.898652 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerStarted","Data":"1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c"}
Feb 02 07:06:17 crc kubenswrapper[4842]: I0202 07:06:17.194569 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 07:06:17 crc kubenswrapper[4842]: I0202 07:06:17.455004 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" path="/var/lib/kubelet/pods/948096a2-7fcf-4cb1-90da-90f3edbfd95b/volumes"
Feb 02 07:06:17 crc kubenswrapper[4842]: I0202 07:06:17.455891 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" path="/var/lib/kubelet/pods/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49/volumes"
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.160901 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cc5c967fd-w6ljx"
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.410946 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cc5c967fd-w6ljx"
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.469688 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"]
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.469899 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-578f976b4-mj2qx" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api-log" containerID="cri-o://2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777" gracePeriod=30
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.470288 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-578f976b4-mj2qx" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" containerID="cri-o://589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f" gracePeriod=30
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.930928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerStarted","Data":"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681"}
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.931282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerStarted","Data":"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b"}
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.934637 4842 generic.go:334] "Generic (PLEG): container finished" podID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerID="2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777" exitCode=143
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.934719 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerDied","Data":"2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777"}
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.936616 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerStarted","Data":"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"}
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.936798 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938613 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerStarted","Data":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"}
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938697 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" containerID="cri-o://0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" gracePeriod=30
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938798 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938844 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" containerID="cri-o://63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" gracePeriod=30
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.949073 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.312338565 podStartE2EDuration="3.949060909s" podCreationTimestamp="2026-02-02 07:06:15 +0000 UTC" firstStartedPulling="2026-02-02 07:06:16.217810629 +0000 UTC m=+1201.595078541" lastFinishedPulling="2026-02-02 07:06:16.854532973 +0000 UTC m=+1202.231800885" observedRunningTime="2026-02-02 07:06:18.9466823 +0000 UTC m=+1204.323950212" watchObservedRunningTime="2026-02-02 07:06:18.949060909 +0000 UTC m=+1204.326328811"
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.972048 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" podStartSLOduration=3.972034945 podStartE2EDuration="3.972034945s" podCreationTimestamp="2026-02-02 07:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:18.970457716 +0000 UTC m=+1204.347725628" watchObservedRunningTime="2026-02-02 07:06:18.972034945 +0000 UTC m=+1204.349302857"
Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.991399 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.991384251 podStartE2EDuration="3.991384251s" podCreationTimestamp="2026-02-02 07:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:18.987878265 +0000 UTC m=+1204.365146207" watchObservedRunningTime="2026-02-02 07:06:18.991384251 +0000 UTC m=+1204.368652163"
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.675856 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.675971 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676012 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676046 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676196 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676301 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676340 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.677493 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs" (OuterVolumeSpecName: "logs") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.677544 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.690777 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts" (OuterVolumeSpecName: "scripts") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.690818 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.696805 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz" (OuterVolumeSpecName: "kube-api-access-thgcz") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "kube-api-access-thgcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.746363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.787818 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788002 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788084 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788157 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788320 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788407 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.826354 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data" (OuterVolumeSpecName: "config-data") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.884861 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.890337 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948100 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" exitCode=0 Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948152 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948177 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"7ea6f3db6a36a7dee937382b0699d18f0905deeb5700b93c12a3f06c02d6628f"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948192 4842 scope.go:117] "RemoveContainer" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948316 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.952257 4842 generic.go:334] "Generic (PLEG): container finished" podID="ccb5d691-9421-4007-8184-b3885f746622" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" exitCode=0 Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.952274 4842 generic.go:334] "Generic (PLEG): container finished" podID="ccb5d691-9421-4007-8184-b3885f746622" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" exitCode=143 Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.952954 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.955792 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerDied","Data":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.955860 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerDied","Data":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.955876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerDied","Data":"0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.975636 4842 scope.go:117] "RemoveContainer" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.991139 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.991591 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.991751 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992197 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992324 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992791 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993028 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993094 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993657 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993695 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.999280 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576" (OuterVolumeSpecName: "kube-api-access-h2576") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "kube-api-access-h2576". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.002454 4842 scope.go:117] "RemoveContainer" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.002710 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts" (OuterVolumeSpecName: "scripts") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.006688 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.015384 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.035596 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.035986 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.035999 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036014 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036020 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036034 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036041 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036049 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036056 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036067 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036072 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036094 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036100 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036260 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036275 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036286 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 
07:06:20.036295 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036306 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036317 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.037245 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.039845 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.039883 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.044925 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.040618 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.054112 4842 scope.go:117] "RemoveContainer" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.054557 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.088510 4842 scope.go:117] "RemoveContainer" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.089079 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0\": container with ID starting with fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0 not found: ID does not exist" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089119 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0"} err="failed to get container status \"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0\": rpc error: code = NotFound desc = could not find container \"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0\": container with ID starting with fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089148 4842 scope.go:117] "RemoveContainer" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.089515 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921\": container with ID starting with 46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921 not found: ID does not exist" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089552 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921"} err="failed to get container status \"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921\": rpc error: code = NotFound desc = could not find container \"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921\": container with ID starting with 46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089581 4842 scope.go:117] "RemoveContainer" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.089841 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf\": container with ID starting with 489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf not found: ID does not exist" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089877 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf"} err="failed to get container status \"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf\": rpc error: code = NotFound desc = could not 
find container \"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf\": container with ID starting with 489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089898 4842 scope.go:117] "RemoveContainer" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.090195 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132\": container with ID starting with 2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132 not found: ID does not exist" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.090226 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132"} err="failed to get container status \"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132\": rpc error: code = NotFound desc = could not find container \"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132\": container with ID starting with 2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.090241 4842 scope.go:117] "RemoveContainer" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.095095 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.095115 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.095124 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.101363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.110024 4842 scope.go:117] "RemoveContainer" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.124301 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data" (OuterVolumeSpecName: "config-data") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.145413 4842 scope.go:117] "RemoveContainer" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.146071 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": container with ID starting with 63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad not found: ID does not exist" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146238 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"} err="failed to get container status \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": rpc error: code = NotFound desc = could not find container \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": container with ID starting with 63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146265 4842 scope.go:117] "RemoveContainer" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.146553 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": container with ID starting with 0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547 not found: ID does not exist" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146574 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"} err="failed to get container status \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": rpc error: code = NotFound desc = could not find container \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": container with ID starting with 0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146595 4842 scope.go:117] "RemoveContainer" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146794 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"} err="failed to get container status \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": rpc error: code = NotFound desc = could not find container \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": container with ID starting with 63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146814 4842 scope.go:117] "RemoveContainer" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.147019 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"} err="failed to get container status \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": rpc error: code = NotFound desc = could not find container \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": container with ID starting with 0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196570 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196612 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196726 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196757 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196890 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196958 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197033 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: 
I0202 07:06:20.197259 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197385 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197406 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.295917 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.299747 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.299889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.300634 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.300539 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301371 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301595 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301677 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301808 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.302085 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.303370 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.305916 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.306057 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.307547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.307964 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.308012 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.308990 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.326320 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.342255 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.368457 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.375596 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.380352 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.383852 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.384153 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.394550 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506543 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506584 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506754 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506778 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506828 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506917 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.507043 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.563077 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608733 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608842 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608899 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608960 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608985 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609010 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609060 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609467 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609508 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.613023 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.615278 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.615550 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.615899 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.626862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.717158 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.834063 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: W0202 07:06:20.838973 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod900b2d20_01c8_47e0_8271_ccfd8549d468.slice/crio-f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76 WatchSource:0}: Error finding container f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76: Status 404 returned error can't find the container with id f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76 Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.970278 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerStarted","Data":"f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.055577 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:21 crc kubenswrapper[4842]: W0202 07:06:21.060176 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0636bdd6_0d17_4f9b_9031_663dfb98f672.slice/crio-2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f WatchSource:0}: Error finding container 2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f: Status 404 returned error can't find the container with id 2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.454916 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb5d691-9421-4007-8184-b3885f746622" path="/var/lib/kubelet/pods/ccb5d691-9421-4007-8184-b3885f746622/volumes" Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.461739 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" path="/var/lib/kubelet/pods/e7aab5ec-829b-42dd-89db-74e28ab9346e/volumes" Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.979621 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerStarted","Data":"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.981690 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.981731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.983973 4842 generic.go:334] "Generic (PLEG): container finished" podID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerID="589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f" exitCode=0 Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.984018 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" 
event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerDied","Data":"589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f"} Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.249364 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.348564 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.348925 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs" (OuterVolumeSpecName: "logs") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.348935 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349059 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349103 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349151 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349932 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.355608 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8" (OuterVolumeSpecName: "kube-api-access-xs4k8") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "kube-api-access-xs4k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.363771 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.399293 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.425332 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data" (OuterVolumeSpecName: "config-data") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.451759 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.451981 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.452091 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.452176 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.001888 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerStarted","Data":"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab"} Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.002687 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.005274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c"} Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.006918 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerDied","Data":"5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873"} Feb 02 07:06:23 
crc kubenswrapper[4842]: I0202 07:06:23.006949 4842 scope.go:117] "RemoveContainer" containerID="589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.007071 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.032804 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.032784124 podStartE2EDuration="4.032784124s" podCreationTimestamp="2026-02-02 07:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:23.026191002 +0000 UTC m=+1208.403458954" watchObservedRunningTime="2026-02-02 07:06:23.032784124 +0000 UTC m=+1208.410052036" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.054362 4842 scope.go:117] "RemoveContainer" containerID="2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.086331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.099961 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.287967 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.446209 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" path="/var/lib/kubelet/pods/ac50621f-67cd-441d-99ea-6839f7f3b556/volumes" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.548252 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.548498 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" containerID="cri-o://b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6" gracePeriod=30 Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.548752 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" containerID="cri-o://ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775" gracePeriod=30 Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.553904 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9696/\": EOF" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.574804 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:06:23 crc kubenswrapper[4842]: E0202 07:06:23.575165 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api-log" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575176 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" 
containerName="barbican-api-log" Feb 02 07:06:23 crc kubenswrapper[4842]: E0202 07:06:23.575192 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575198 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575375 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575398 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api-log" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.576269 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.591337 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679596 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679659 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679677 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679752 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679772 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679793 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " 
pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679813 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781482 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781535 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781553 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781632 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781651 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781670 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781689 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.788375 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.790511 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.791814 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.794108 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.801899 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.805827 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.836032 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.891222 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.034550 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9"} Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.038054 4842 generic.go:334] "Generic (PLEG): container finished" podID="3aaab28f-fb61-4600-b66f-a485ca345112" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775" exitCode=0 Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.038194 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerDied","Data":"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"} Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.509676 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.077337 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerStarted","Data":"69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83"} Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.077734 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerStarted","Data":"679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca"} Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.077762 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerStarted","Data":"642e7ab1c818fa3e0857124b890ed7f6355271588ac21bdb99c64d978b7374b0"} Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.079475 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.113930 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6684555597-gjtgz" podStartSLOduration=2.113902098 podStartE2EDuration="2.113902098s" podCreationTimestamp="2026-02-02 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:25.105708796 +0000 UTC m=+1210.482976738" watchObservedRunningTime="2026-02-02 07:06:25.113902098 +0000 UTC m=+1210.491170050" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.551387 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.603274 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9696/\": dial tcp 10.217.0.152:9696: connect: connection refused" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.721508 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.722006 4842 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" containerID="cri-o://053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117" gracePeriod=10 Feb 02 07:06:25 crc kubenswrapper[4842]: E0202 07:06:25.882255 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595bc2a4_f0b8_4930_8c66_3b3da4cc4630.slice/crio-conmon-053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.063027 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.102236 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04"} Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.105150 4842 generic.go:334] "Generic (PLEG): container finished" podID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerID="053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117" exitCode=0 Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.105235 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerDied","Data":"053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117"} Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.148375 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.148660 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="cinder-scheduler" containerID="cri-o://2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" gracePeriod=30 Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.149152 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" containerID="cri-o://54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" gracePeriod=30 Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.169109 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.057823247 podStartE2EDuration="6.169084181s" podCreationTimestamp="2026-02-02 07:06:20 +0000 UTC" firstStartedPulling="2026-02-02 07:06:21.062433827 +0000 UTC m=+1206.439701739" lastFinishedPulling="2026-02-02 07:06:25.173694761 +0000 UTC m=+1210.550962673" observedRunningTime="2026-02-02 07:06:26.139449001 +0000 UTC m=+1211.516716913" watchObservedRunningTime="2026-02-02 07:06:26.169084181 +0000 UTC m=+1211.546352093" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.343161 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.461888 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.461983 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462050 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462121 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462164 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462228 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.473353 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx" (OuterVolumeSpecName: "kube-api-access-r84hx") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "kube-api-access-r84hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.507090 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.512553 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.512817 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config" (OuterVolumeSpecName: "config") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.520801 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.531115 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564839 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564866 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564875 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564886 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564896 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564906 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.115812 4842 generic.go:334] "Generic (PLEG): container finished" podID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" exitCode=0 Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.115893 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerDied","Data":"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681"} Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 
07:06:27.118727 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerDied","Data":"f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d"} Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.118794 4842 scope.go:117] "RemoveContainer" containerID="053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.118942 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.119130 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.142758 4842 scope.go:117] "RemoveContainer" containerID="b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.155840 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.164190 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.448627 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" path="/var/lib/kubelet/pods/595bc2a4-f0b8-4930-8c66-3b3da4cc4630/volumes" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.683898 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832790 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832839 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832892 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832947 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: 
I0202 07:06:29.833065 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.836765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.839580 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.840058 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts" (OuterVolumeSpecName: "scripts") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.842753 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng" (OuterVolumeSpecName: "kube-api-access-gw2ng") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "kube-api-access-gw2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.894319 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935115 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935143 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935153 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935161 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935170 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.965502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data" (OuterVolumeSpecName: "config-data") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.037199 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146753 4842 generic.go:334] "Generic (PLEG): container finished" podID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" exitCode=0 Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146796 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerDied","Data":"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b"} Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerDied","Data":"448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db"} Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146842 4842 scope.go:117] "RemoveContainer" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146842 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.193153 4842 scope.go:117] "RemoveContainer" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.214440 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.220761 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.230354 4842 scope.go:117] "RemoveContainer" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.230764 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681\": container with ID starting with 54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681 not found: ID does not exist" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.230793 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681"} err="failed to get container status \"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681\": rpc error: code = NotFound desc = could not find container \"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681\": container with ID starting with 54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681 not found: ID does not exist" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.230813 4842 scope.go:117] "RemoveContainer" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.231630 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b\": container with ID starting with 2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b not found: ID does not exist" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.231662 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b"} err="failed to get container status \"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b\": rpc error: code = NotFound desc = could not find container \"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b\": container with ID starting with 2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b not found: ID does not exist" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.235826 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236315 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="cinder-scheduler" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236333 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" 
containerName="cinder-scheduler" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236350 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236356 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236394 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236402 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236415 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="init" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236423 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="init" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236654 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="cinder-scheduler" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236670 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236680 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.241706 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.251030 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.252367 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345150 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345249 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345398 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345468 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345538 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345781 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.447858 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.447935 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.447967 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.448034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.448073 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.448114 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.449150 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.453341 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.453547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.454201 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.454619 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.468898 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc 
kubenswrapper[4842]: I0202 07:06:30.567937 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:31 crc kubenswrapper[4842]: W0202 07:06:31.032027 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115a51a9_6125_46e1_a960_a66cb9957d38.slice/crio-d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28 WatchSource:0}: Error finding container d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28: Status 404 returned error can't find the container with id d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28 Feb 02 07:06:31 crc kubenswrapper[4842]: I0202 07:06:31.042661 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:31 crc kubenswrapper[4842]: I0202 07:06:31.157431 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerStarted","Data":"d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28"} Feb 02 07:06:31 crc kubenswrapper[4842]: I0202 07:06:31.444360 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" path="/var/lib/kubelet/pods/d737380b-08d3-455f-a9a7-080d76cabc9f/volumes" Feb 02 07:06:32 crc kubenswrapper[4842]: I0202 07:06:32.152720 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 07:06:32 crc kubenswrapper[4842]: I0202 07:06:32.197469 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerStarted","Data":"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"} Feb 02 07:06:33 crc kubenswrapper[4842]: I0202 07:06:33.207807 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerStarted","Data":"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"} Feb 02 07:06:33 crc kubenswrapper[4842]: I0202 07:06:33.231670 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.231653235 podStartE2EDuration="3.231653235s" podCreationTimestamp="2026-02-02 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:33.224679433 +0000 UTC m=+1218.601947355" watchObservedRunningTime="2026-02-02 07:06:33.231653235 +0000 UTC m=+1218.608921147" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.105618 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.113039 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.367460 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.451330 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.679046 4842 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.746069 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:35 crc kubenswrapper[4842]: I0202 07:06:35.225585 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-697d496d6b-bz7zg" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" containerID="cri-o://dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" gracePeriod=30 Feb 02 07:06:35 crc kubenswrapper[4842]: I0202 07:06:35.225685 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-697d496d6b-bz7zg" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" containerID="cri-o://82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" gracePeriod=30 Feb 02 07:06:35 crc kubenswrapper[4842]: I0202 07:06:35.568546 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 07:06:36 crc kubenswrapper[4842]: I0202 07:06:36.239521 4842 generic.go:334] "Generic (PLEG): container finished" podID="726c1772-2536-414e-a6ce-9c1437b021d1" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" exitCode=143 Feb 02 07:06:36 crc kubenswrapper[4842]: I0202 07:06:36.239630 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerDied","Data":"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe"} Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.772720 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.773665 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" containerID="cri-o://0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.773794 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" containerID="cri-o://9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.773832 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" containerID="cri-o://65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.774010 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" containerID="cri-o://80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.829318 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.843608 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847381 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847590 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847607 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847695 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907674 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907735 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907757 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907796 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908060 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908209 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908315 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " 
pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908408 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.932051 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009490 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009579 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009698 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009746 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009784 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009861 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009905 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010160 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010254 
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010254 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010304 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010406 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010452 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010483 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010521 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010568 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.011963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.012793 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5"
"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs" (OuterVolumeSpecName: "logs") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.017382 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.019000 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm" (OuterVolumeSpecName: "kube-api-access-42brm") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "kube-api-access-42brm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.019897 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.020513 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.023409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts" (OuterVolumeSpecName: "scripts") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.024069 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.025774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.032850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079085 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.079647 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079677 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.079708 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079716 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079942 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079969 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.080582 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.083957 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-rzqpc" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.084980 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.085135 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.088970 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112103 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112506 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112527 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112600 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112656 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112674 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112685 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.122539 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data" (OuterVolumeSpecName: "config-data") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.124393 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.124490 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.137166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.143915 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.167:3000/\": read tcp 10.217.0.2:53090->10.217.0.167:3000: read: connection reset by peer" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214364 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214459 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214517 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214535 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214626 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc 
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214642 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214651 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214659 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.215360 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.218870 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.218877 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.228793 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient"
Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.255807 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5"
Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264303 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerDied","Data":"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264464 4842 scope.go:117] "RemoveContainer" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264312 4842 generic.go:334] "Generic (PLEG): container finished" podID="726c1772-2536-414e-a6ce-9c1437b021d1" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" exitCode=0 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerDied","Data":"3841fc7dcb9ce569457a802c09c27ff59529bd2560831414d8333da874fb2c77"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292822 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04" exitCode=0 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292850 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9" exitCode=2 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292860 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7" exitCode=0 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292900 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292909 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.315736 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.323007 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.419451 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.433075 4842 scope.go:117] "RemoveContainer" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.452069 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" path="/var/lib/kubelet/pods/726c1772-2536-414e-a6ce-9c1437b021d1/volumes" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.600158 4842 scope.go:117] "RemoveContainer" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.600978 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786\": container with ID starting with 82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786 not found: ID does not exist" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.601013 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786"} err="failed to get container status \"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786\": rpc error: code = NotFound desc = could not find container \"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786\": container with ID starting with 82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786 not found: ID does not exist" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.601038 4842 scope.go:117] "RemoveContainer" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.603599 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe\": container with ID starting with dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe not found: ID does not exist" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.603629 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe"} err="failed to get container status \"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe\": rpc error: code = NotFound desc = could not find container \"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe\": container with ID starting with dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe not found: ID does not exist" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.806393 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.923491 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.932347 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.032892 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.032972 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033056 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033166 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033260 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033297 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033400 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.037465 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4" (OuterVolumeSpecName: "kube-api-access-4g4w4") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "kube-api-access-4g4w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.037735 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.102968 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.107147 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.109281 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config" (OuterVolumeSpecName: "config") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.114570 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.121521 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135544 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135870 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135880 4842 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135892 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135901 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135910 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135927 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.303710 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"590d1088-e964-43a6-b879-01c8b83d4147","Type":"ContainerStarted","Data":"abd7b9a59e647cb412c034e625a72fdd9b5e8c874ae4e981bda1619d04a7aa91"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.305876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerStarted","Data":"49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.305901 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerStarted","Data":"1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.305912 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerStarted","Data":"c97160040d0350fa9bd5e1bbc3b5084d4e4f379ea92abc97f8017a5311a0c9cf"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.306043 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.306094 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:40 crc 
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.307996 4842 generic.go:334] "Generic (PLEG): container finished" podID="3aaab28f-fb61-4600-b66f-a485ca345112" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6" exitCode=0
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308039 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308041 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerDied","Data":"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"}
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308069 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerDied","Data":"6baf18e2465586bae82b31b897e8d4dfb75242a3b157fb93fe3a29ff487cbf1b"}
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308088 4842 scope.go:117] "RemoveContainer" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.327772 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-659598d599-lpzh5" podStartSLOduration=2.327757175 podStartE2EDuration="2.327757175s" podCreationTimestamp="2026-02-02 07:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:40.326497714 +0000 UTC m=+1225.703765626" watchObservedRunningTime="2026-02-02 07:06:40.327757175 +0000 UTC m=+1225.705025087"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.343963 4842 scope.go:117] "RemoveContainer" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.349922 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"]
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.357487 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"]
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.363558 4842 scope.go:117] "RemoveContainer" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"
Feb 02 07:06:40 crc kubenswrapper[4842]: E0202 07:06:40.364053 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775\": container with ID starting with ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775 not found: ID does not exist" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.364086 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"} err="failed to get container status \"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775\": rpc error: code = NotFound desc = could not find container \"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775\": container with ID starting with ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775 not found: ID does not exist"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.364111 4842 scope.go:117] "RemoveContainer" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"
Feb 02 07:06:40 crc kubenswrapper[4842]: E0202 07:06:40.364718 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6\": container with ID starting with b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6 not found: ID does not exist" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.364756 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"} err="failed to get container status \"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6\": rpc error: code = NotFound desc = could not find container \"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6\": container with ID starting with b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6 not found: ID does not exist"
Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.780729 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 02 07:06:41 crc kubenswrapper[4842]: I0202 07:06:41.476947 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" path="/var/lib/kubelet/pods/3aaab28f-fb61-4600-b66f-a485ca345112/volumes"
Feb 02 07:06:42 crc kubenswrapper[4842]: I0202 07:06:42.146338 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:06:42 crc kubenswrapper[4842]: I0202 07:06:42.146616 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:06:44 crc kubenswrapper[4842]: I0202 07:06:44.362538 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c" exitCode=0
Feb 02 07:06:44 crc kubenswrapper[4842]: I0202 07:06:44.362747 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c"}
Feb 02 07:06:45 crc kubenswrapper[4842]: I0202 07:06:45.497929 4842 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podd9f1c72e-953b-45ba-ba69-c7574f82e8ad"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podd9f1c72e-953b-45ba-ba69-c7574f82e8ad] : Timed out while waiting for systemd to remove kubepods-besteffort-podd9f1c72e_953b_45ba_ba69_c7574f82e8ad.slice"
Feb 02 07:06:45 crc kubenswrapper[4842]: E0202 07:06:45.497999 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podd9f1c72e-953b-45ba-ba69-c7574f82e8ad] : unable to destroy cgroup paths for cgroup [kubepods besteffort podd9f1c72e-953b-45ba-ba69-c7574f82e8ad] : Timed out while waiting for systemd to remove kubepods-besteffort-podd9f1c72e_953b_45ba_ba69_c7574f82e8ad.slice" pod="openstack/cinder-db-sync-phj68" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad"
Feb 02 07:06:46 crc kubenswrapper[4842]: I0202 07:06:46.386317 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68"
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.267714 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.275825 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659598d599-lpzh5"
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.475245 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.476807 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f"}
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.476898 4842 scope.go:117] "RemoveContainer" containerID="9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04"
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.506026 4842 scope.go:117] "RemoveContainer" containerID="65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9"
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.523915 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") "
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.523981 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") "
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.523997 4842 scope.go:117] "RemoveContainer" containerID="80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c"
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524098 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") "
Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") "
pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524253 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524276 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524574 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.525016 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.525263 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.530159 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts" (OuterVolumeSpecName: "scripts") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.534119 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm" (OuterVolumeSpecName: "kube-api-access-hf6fm") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "kube-api-access-hf6fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.550962 4842 scope.go:117] "RemoveContainer" containerID="0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.565710 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.596113 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.624191 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data" (OuterVolumeSpecName: "config-data") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626404 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626431 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626441 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626449 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626459 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626468 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.446912 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"590d1088-e964-43a6-b879-01c8b83d4147","Type":"ContainerStarted","Data":"7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316"} Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.447888 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.467897 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.123709148 podStartE2EDuration="11.467882114s" podCreationTimestamp="2026-02-02 07:06:39 +0000 UTC" firstStartedPulling="2026-02-02 07:06:39.958638122 +0000 UTC m=+1225.335906034" lastFinishedPulling="2026-02-02 07:06:49.302811088 +0000 UTC m=+1234.680079000" observedRunningTime="2026-02-02 07:06:50.465886664 +0000 UTC m=+1235.843154596" watchObservedRunningTime="2026-02-02 07:06:50.467882114 +0000 UTC m=+1235.845150026" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.485380 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.495296 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507117 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507461 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507475 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507486 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507491 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507503 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507509 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507524 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507531 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507548 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507553 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507564 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507569 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 
07:06:50.507729 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507742 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507759 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507768 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507782 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507791 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.510478 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.515900 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.518274 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.520137 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.652830 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.652934 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653000 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653054 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653625 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
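The paired cpu_manager.go:410 / state_mem.go:107 and memory_manager.go:354 entries above are a stale-state sweep: resource-manager assignments keyed by (podUID, containerName) whose pods are no longer active get logged and purged before the replacement ceilometer-0 is admitted. An illustrative sketch of that pass, with toy types:

package main

import "fmt"

// removeStaleState drops assignments whose pod is no longer active; the error
// level in the real log reflects that an assignment outlived its pod.
type key struct{ podUID, container string }

func removeStaleState(assignments map[key][]int, active map[string]bool) {
	for k := range assignments {
		if active[k.podUID] {
			continue
		}
		fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n", k.container, k.podUID)
		delete(assignments, k) // "Deleted CPUSet assignment"
	}
}

func main() {
	state := map[key][]int{
		{"0636bdd6-0d17-4f9b-9031-663dfb98f672", "sg-core"}:     {2, 3},
		{"3aaab28f-fb61-4600-b66f-a485ca345112", "neutron-api"}: {0, 1},
	}
	removeStaleState(state, map[string]bool{}) // neither pod is active any more
}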
\"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653691 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653804 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.699488 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.700073 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-sghhj log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="ef57521a-a9fc-42b0-b641-1258e3bfdf34" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755195 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755248 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755275 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755404 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " 
pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755454 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755777 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755884 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.760838 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.762872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.767932 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.770739 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.782700 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.445083 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" path="/var/lib/kubelet/pods/0636bdd6-0d17-4f9b-9031-663dfb98f672/volumes" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.456380 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.470180 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669627 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669764 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669884 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670074 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670247 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670364 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670459 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670854 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670942 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.674096 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.675861 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj" (OuterVolumeSpecName: "kube-api-access-sghhj") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "kube-api-access-sghhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.676183 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts" (OuterVolumeSpecName: "scripts") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.677437 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.679362 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data" (OuterVolumeSpecName: "config-data") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771892 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771935 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771945 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771958 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771966 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771975 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.464815 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.554921 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.571654 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.581833 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.584753 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.586600 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588455 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588572 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588606 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588652 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588684 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588720 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588818 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.597687 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690102 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690141 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690230 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690278 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690304 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690339 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690374 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690648 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690913 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.697384 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.697579 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.697941 4842 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.707062 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.712118 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.911142 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.347580 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.443651 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef57521a-a9fc-42b0-b641-1258e3bfdf34" path="/var/lib/kubelet/pods/ef57521a-a9fc-42b0-b641-1258e3bfdf34/volumes" Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.476453 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"610ef45c658d7af4f1bfccb5ab1bcf0f7f84312f0fd214a19b9a637d039efaf5"} Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.905615 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.969323 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.969544 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b469b995b-npwfd" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" containerID="cri-o://6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9" gracePeriod=30 Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.969823 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b469b995b-npwfd" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" containerID="cri-o://f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38" gracePeriod=30 Feb 02 07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.377046 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.495835 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599"} Feb 02 07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.498106 4842 generic.go:334] "Generic (PLEG): container finished" podID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerID="f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38" exitCode=0 Feb 02 
07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.498145 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerDied","Data":"f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38"} Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.533071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd"} Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.579894 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.580903 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.595918 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.656731 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.657444 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.690397 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.691534 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.693810 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.698561 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.699489 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.708642 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.726339 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759746 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759794 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759862 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759900 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759923 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.760647 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.786122 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod 
\"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864669 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864780 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864828 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.865552 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.865555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.882662 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.885312 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.885461 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.896716 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.902833 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.903951 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.904466 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.908619 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.938608 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.964275 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966435 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966528 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966590 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.054856 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.060650 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.068838 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.068945 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.068974 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.069005 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.070571 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.072995 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.096281 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.097605 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.101791 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.103380 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.109819 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.130385 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.171332 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.171469 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.272767 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.272839 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.274956 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.293021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.310693 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.319137 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.421923 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.557112 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dg9pd" event={"ID":"4b414999-f3d0-4101-abe7-ed8c7747ce5f","Type":"ContainerStarted","Data":"94ef265414e26a0b5006140a913a8d7ff6850122bee0165ff7e8ae90e61983f0"} Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.568494 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd"} Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.571379 4842 generic.go:334] "Generic (PLEG): container finished" podID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerID="6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9" exitCode=0 Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.571407 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerDied","Data":"6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9"} Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.586060 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.632052 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.726734 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795336 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795404 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795468 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795493 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795626 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.803400 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.817078 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.819583 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482" (OuterVolumeSpecName: "kube-api-access-2b482") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "kube-api-access-2b482". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: W0202 07:06:56.820953 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8715fd_8755_4bd6_82a7_bf49d61e1779.slice/crio-0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6 WatchSource:0}: Error finding container 0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6: Status 404 returned error can't find the container with id 0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6 Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.899447 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.899481 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.932390 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.935438 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.958116 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config" (OuterVolumeSpecName: "config") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.969035 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.985811 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:06:56 crc kubenswrapper[4842]: W0202 07:06:56.991121 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod939ed5f9_679d_44c4_8282_d1404d98b420.slice/crio-41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b WatchSource:0}: Error finding container 41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b: Status 404 returned error can't find the container with id 41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b Feb 02 07:06:56 crc kubenswrapper[4842]: W0202 07:06:56.994196 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668f221e_e491_4ec6_9f40_82dd1afc3ac8.slice/crio-2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc WatchSource:0}: Error finding container 2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc: Status 404 returned error can't find the container with id 2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.003152 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.003187 4842 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.003197 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.228071 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:06:57 crc kubenswrapper[4842]: W0202 07:06:57.259999 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d15d01_9c12_4b4f_9cec_037a1d21fab1.slice/crio-9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581 WatchSource:0}: Error finding container 9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581: Status 404 returned error can't find the container with id 9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.580536 4842 generic.go:334] "Generic (PLEG): container finished" podID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerID="adafd15daec92386baa24cf42bc0363f97b26ac9307e8e8272e537e2c7e8b2cf" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.580581 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jph4l" event={"ID":"2d8715fd-8755-4bd6-82a7-bf49d61e1779","Type":"ContainerDied","Data":"adafd15daec92386baa24cf42bc0363f97b26ac9307e8e8272e537e2c7e8b2cf"} Feb 
02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.580629 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jph4l" event={"ID":"2d8715fd-8755-4bd6-82a7-bf49d61e1779","Type":"ContainerStarted","Data":"0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.582636 4842 generic.go:334] "Generic (PLEG): container finished" podID="52bba199-2794-4828-9a54-e1aac49fb223" containerID="7b7d5e5edb2af232c2055e5da49c69d329f4113726a849604a2b594aefa2f3af" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.582702 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-pb4bw" event={"ID":"52bba199-2794-4828-9a54-e1aac49fb223","Type":"ContainerDied","Data":"7b7d5e5edb2af232c2055e5da49c69d329f4113726a849604a2b594aefa2f3af"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.582725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-pb4bw" event={"ID":"52bba199-2794-4828-9a54-e1aac49fb223","Type":"ContainerStarted","Data":"b9b079e5b40935f5c3957e2ff08d97c88f3c365e78a54eba6ef83e9680d55e18"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.584234 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerStarted","Data":"e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.584256 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerStarted","Data":"9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.585747 4842 generic.go:334] "Generic (PLEG): container finished" podID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerID="ca50f3bd514767840a56ccfe9f58d3e7f3e73682b97d7191a9419836cd607b01" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.585776 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-llc96" event={"ID":"668f221e-e491-4ec6-9f40-82dd1afc3ac8","Type":"ContainerDied","Data":"ca50f3bd514767840a56ccfe9f58d3e7f3e73682b97d7191a9419836cd607b01"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.585802 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-llc96" event={"ID":"668f221e-e491-4ec6-9f40-82dd1afc3ac8","Type":"ContainerStarted","Data":"2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.587240 4842 generic.go:334] "Generic (PLEG): container finished" podID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" containerID="b2f7cb4727d9784f10ff6a0c8a30a31bb44be887023eca0a860978903f19daa6" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.587262 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dg9pd" event={"ID":"4b414999-f3d0-4101-abe7-ed8c7747ce5f","Type":"ContainerDied","Data":"b2f7cb4727d9784f10ff6a0c8a30a31bb44be887023eca0a860978903f19daa6"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.588613 4842 generic.go:334] "Generic (PLEG): container finished" podID="939ed5f9-679d-44c4-8282-d1404d98b420" 
containerID="2c7088cf1821b77c6f7eefcfe1152002a124d024b112d220292c3bfdaf924d4c" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.588656 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79v8r" event={"ID":"939ed5f9-679d-44c4-8282-d1404d98b420","Type":"ContainerDied","Data":"2c7088cf1821b77c6f7eefcfe1152002a124d024b112d220292c3bfdaf924d4c"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.588672 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79v8r" event={"ID":"939ed5f9-679d-44c4-8282-d1404d98b420","Type":"ContainerStarted","Data":"41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.590765 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerDied","Data":"c685a8dc8410d6a7a79b5205dd3ff23339631326844f2a5b84578d841706238e"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.590827 4842 scope.go:117] "RemoveContainer" containerID="f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.590781 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.619433 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" podStartSLOduration=1.619414441 podStartE2EDuration="1.619414441s" podCreationTimestamp="2026-02-02 07:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:57.613473314 +0000 UTC m=+1242.990741226" watchObservedRunningTime="2026-02-02 07:06:57.619414441 +0000 UTC m=+1242.996682353" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.731566 4842 scope.go:117] "RemoveContainer" containerID="6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.733419 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.741712 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.831414 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.831651 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" containerID="cri-o://5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" gracePeriod=30 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.831763 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" containerID="cri-o://8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.605621 4842 generic.go:334] "Generic (PLEG): container finished" podID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" 
containerID="e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867" exitCode=0 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.605681 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerDied","Data":"e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867"} Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612753 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009"} Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612870 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" containerID="cri-o://022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612895 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612955 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" containerID="cri-o://23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.613022 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" containerID="cri-o://36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.613110 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" containerID="cri-o://3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.628022 4842 generic.go:334] "Generic (PLEG): container finished" podID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" exitCode=143 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.628101 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerDied","Data":"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce"} Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.981246 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.001172 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.085177527 podStartE2EDuration="7.00115542s" podCreationTimestamp="2026-02-02 07:06:52 +0000 UTC" firstStartedPulling="2026-02-02 07:06:53.354104836 +0000 UTC m=+1238.731372748" lastFinishedPulling="2026-02-02 07:06:58.270082729 +0000 UTC m=+1243.647350641" observedRunningTime="2026-02-02 07:06:58.648974216 +0000 UTC m=+1244.026242158" watchObservedRunningTime="2026-02-02 07:06:59.00115542 +0000 UTC m=+1244.378423332" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.044787 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.044852 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.046028 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "668f221e-e491-4ec6-9f40-82dd1afc3ac8" (UID: "668f221e-e491-4ec6-9f40-82dd1afc3ac8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.058607 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr" (OuterVolumeSpecName: "kube-api-access-28qpr") pod "668f221e-e491-4ec6-9f40-82dd1afc3ac8" (UID: "668f221e-e491-4ec6-9f40-82dd1afc3ac8"). InnerVolumeSpecName "kube-api-access-28qpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.147204 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.147242 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.212853 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.218234 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.226348 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.235409 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250082 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250168 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250286 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.254933 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b414999-f3d0-4101-abe7-ed8c7747ce5f" (UID: "4b414999-f3d0-4101-abe7-ed8c7747ce5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.255308 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d8715fd-8755-4bd6-82a7-bf49d61e1779" (UID: "2d8715fd-8755-4bd6-82a7-bf49d61e1779"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.256682 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k" (OuterVolumeSpecName: "kube-api-access-t5k4k") pod "4b414999-f3d0-4101-abe7-ed8c7747ce5f" (UID: "4b414999-f3d0-4101-abe7-ed8c7747ce5f"). InnerVolumeSpecName "kube-api-access-t5k4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.287571 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr" (OuterVolumeSpecName: "kube-api-access-zpvzr") pod "2d8715fd-8755-4bd6-82a7-bf49d61e1779" (UID: "2d8715fd-8755-4bd6-82a7-bf49d61e1779"). InnerVolumeSpecName "kube-api-access-zpvzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"52bba199-2794-4828-9a54-e1aac49fb223\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352810 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"939ed5f9-679d-44c4-8282-d1404d98b420\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352871 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"52bba199-2794-4828-9a54-e1aac49fb223\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352895 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"939ed5f9-679d-44c4-8282-d1404d98b420\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353200 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "939ed5f9-679d-44c4-8282-d1404d98b420" (UID: "939ed5f9-679d-44c4-8282-d1404d98b420"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353309 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52bba199-2794-4828-9a54-e1aac49fb223" (UID: "52bba199-2794-4828-9a54-e1aac49fb223"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353936 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353953 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353962 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353971 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353980 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353990 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.355877 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj" (OuterVolumeSpecName: "kube-api-access-mq9fj") pod "52bba199-2794-4828-9a54-e1aac49fb223" (UID: "52bba199-2794-4828-9a54-e1aac49fb223"). InnerVolumeSpecName "kube-api-access-mq9fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.356491 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m" (OuterVolumeSpecName: "kube-api-access-m4x9m") pod "939ed5f9-679d-44c4-8282-d1404d98b420" (UID: "939ed5f9-679d-44c4-8282-d1404d98b420"). InnerVolumeSpecName "kube-api-access-m4x9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.450011 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" path="/var/lib/kubelet/pods/a18aba57-b830-47d3-9b18-8946414fdd1d/volumes" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.455810 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.455861 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.641790 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.641791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dg9pd" event={"ID":"4b414999-f3d0-4101-abe7-ed8c7747ce5f","Type":"ContainerDied","Data":"94ef265414e26a0b5006140a913a8d7ff6850122bee0165ff7e8ae90e61983f0"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.642006 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94ef265414e26a0b5006140a913a8d7ff6850122bee0165ff7e8ae90e61983f0" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.645823 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd" exitCode=2 Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.645865 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd" exitCode=0 Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.645982 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.646075 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.649071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79v8r" event={"ID":"939ed5f9-679d-44c4-8282-d1404d98b420","Type":"ContainerDied","Data":"41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.649130 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.649129 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.652951 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jph4l" event={"ID":"2d8715fd-8755-4bd6-82a7-bf49d61e1779","Type":"ContainerDied","Data":"0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.652990 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.653005 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.655292 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.655323 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-pb4bw" event={"ID":"52bba199-2794-4828-9a54-e1aac49fb223","Type":"ContainerDied","Data":"b9b079e5b40935f5c3957e2ff08d97c88f3c365e78a54eba6ef83e9680d55e18"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.655360 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b079e5b40935f5c3957e2ff08d97c88f3c365e78a54eba6ef83e9680d55e18" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.657370 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.657935 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-llc96" event={"ID":"668f221e-e491-4ec6-9f40-82dd1afc3ac8","Type":"ContainerDied","Data":"2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.657964 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.976265 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.064986 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.065117 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.065860 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9d15d01-9c12-4b4f-9cec-037a1d21fab1" (UID: "a9d15d01-9c12-4b4f-9cec-037a1d21fab1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.071797 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp" (OuterVolumeSpecName: "kube-api-access-6k8mp") pod "a9d15d01-9c12-4b4f-9cec-037a1d21fab1" (UID: "a9d15d01-9c12-4b4f-9cec-037a1d21fab1"). InnerVolumeSpecName "kube-api-access-6k8mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.167402 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.167448 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.461843 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.462196 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" containerID="cri-o://224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3" gracePeriod=30 Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.462551 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" containerID="cri-o://17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541" gracePeriod=30 Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.665115 4842 generic.go:334] "Generic (PLEG): container finished" podID="74fb1197-2202-4b15-a858-05dd736a1a26" containerID="17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541" exitCode=143 Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.665167 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerDied","Data":"17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541"} Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.666573 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerDied","Data":"9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581"} Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.666605 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.666699 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.190474 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191344 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191367 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191385 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191410 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191434 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191442 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191459 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bba199-2794-4828-9a54-e1aac49fb223" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191467 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bba199-2794-4828-9a54-e1aac49fb223" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191490 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191498 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191511 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191519 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" 
containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191541 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191549 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191570 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191578 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191783 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191798 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bba199-2794-4828-9a54-e1aac49fb223" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191808 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191818 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191832 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191848 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191863 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191883 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.192633 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.196324 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.196619 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.196791 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zt7nb" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.203844 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.286943 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.287003 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.287100 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.287299 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389107 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389165 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389242 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: 
\"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389343 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.395248 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.395833 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.399699 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.418268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.517798 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.551040 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591845 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591906 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591938 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591961 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592421 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592452 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592601 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592662 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.597704 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts" (OuterVolumeSpecName: "scripts") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.598609 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.599025 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs" (OuterVolumeSpecName: "logs") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.600770 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.602402 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95" (OuterVolumeSpecName: "kube-api-access-m7x95") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "kube-api-access-m7x95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.625352 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.662741 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data" (OuterVolumeSpecName: "config-data") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685839 4842 generic.go:334] "Generic (PLEG): container finished" podID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" exitCode=0 Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685920 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerDied","Data":"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8"} Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685947 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerDied","Data":"a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca"} Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685970 4842 scope.go:117] "RemoveContainer" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.686086 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.692160 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696278 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696301 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696318 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696327 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696336 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696345 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696353 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") 
on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696379 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.723258 4842 scope.go:117] "RemoveContainer" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.731082 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.744894 4842 scope.go:117] "RemoveContainer" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.745322 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8\": container with ID starting with 8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8 not found: ID does not exist" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.745361 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8"} err="failed to get container status \"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8\": rpc error: code = NotFound desc = could not find container \"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8\": container with ID starting with 8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8 not found: ID does not exist" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.745407 4842 scope.go:117] "RemoveContainer" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.745862 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce\": container with ID starting with 5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce not found: ID does not exist" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.745914 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce"} err="failed to get container status \"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce\": rpc error: code = NotFound desc = could not find container \"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce\": container with ID starting with 5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce not found: ID does not exist" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.799371 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.006402 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.028354 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.050414 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.067562 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: E0202 07:07:02.068124 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068153 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" Feb 02 07:07:02 crc kubenswrapper[4842]: E0202 07:07:02.068178 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068184 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068805 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068860 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.069939 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.072852 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.073014 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.081131 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104112 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104154 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104208 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104245 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104287 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104305 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104327 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104351 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.205867 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.205932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.205955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206008 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206091 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206111 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206134 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206429 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.207384 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.207622 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.211348 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.212525 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.213602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.214085 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.228864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.236672 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.413075 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.705982 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerStarted","Data":"90c87f6f53b22b92a4b5061d88a8063f32c54f968d8334ec9cca4c935c7373bc"} Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.763756 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: W0202 07:07:02.768526 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34f55116_a518_4f21_8816_6f8232a6f68d.slice/crio-03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1 WatchSource:0}: Error finding container 03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1: Status 404 returned error can't find the container with id 03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1 Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.448361 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" path="/var/lib/kubelet/pods/09febcea-8bf3-43b8-b6ff-ae8a0e445519/volumes" Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.726649 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerStarted","Data":"c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e"} Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.726687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerStarted","Data":"03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1"} Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.734700 4842 generic.go:334] "Generic (PLEG): container finished" podID="74fb1197-2202-4b15-a858-05dd736a1a26" containerID="224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3" exitCode=0 Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.734743 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerDied","Data":"224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3"} Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.737493 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599" exitCode=0 Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.737521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599"} Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.030058 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136617 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136739 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136801 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136831 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136904 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136945 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136973 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.137045 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.144539 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts" (OuterVolumeSpecName: "scripts") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.146104 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs" (OuterVolumeSpecName: "logs") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.146324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.147406 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.149417 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t" (OuterVolumeSpecName: "kube-api-access-9sx9t") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "kube-api-access-9sx9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.175751 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.213375 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data" (OuterVolumeSpecName: "config-data") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.236684 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239381 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239453 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239466 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239475 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239487 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239496 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239503 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239512 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.257687 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.341379 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.772991 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerStarted","Data":"72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596"} Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.775472 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerDied","Data":"c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d"} Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.775534 4842 scope.go:117] "RemoveContainer" containerID="224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.775637 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.809315 4842 scope.go:117] "RemoveContainer" containerID="17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.809961 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.809941125 podStartE2EDuration="2.809941125s" podCreationTimestamp="2026-02-02 07:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:04.793931368 +0000 UTC m=+1250.171199280" watchObservedRunningTime="2026-02-02 07:07:04.809941125 +0000 UTC m=+1250.187209057" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.858245 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.864024 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.891777 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:04 crc kubenswrapper[4842]: E0202 07:07:04.892259 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892280 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" Feb 02 07:07:04 crc kubenswrapper[4842]: E0202 07:07:04.892303 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892575 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892600 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.893711 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.896826 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.896842 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.921661 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069050 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069114 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069337 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069465 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069594 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069637 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069788 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069851 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172058 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172128 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172204 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172254 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172279 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172315 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172392 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172589 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172645 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172656 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.176941 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.177292 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.177383 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.177580 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.200300 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.203599 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.214945 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.445430 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" path="/var/lib/kubelet/pods/74fb1197-2202-4b15-a858-05dd736a1a26/volumes" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.746887 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:09 crc kubenswrapper[4842]: I0202 07:07:09.825570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerStarted","Data":"1eecf23079bd634775107b900580aa4bb87379a656bc114e56acf8d85609c009"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.838277 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerStarted","Data":"50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.838747 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerStarted","Data":"baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.846419 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerStarted","Data":"f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.867441 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.867422218 podStartE2EDuration="6.867422218s" podCreationTimestamp="2026-02-02 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:10.860764443 +0000 UTC m=+1256.238032365" watchObservedRunningTime="2026-02-02 07:07:10.867422218 +0000 UTC m=+1256.244690150" Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.888101 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6htfz" podStartSLOduration=2.169806867 podStartE2EDuration="9.88808215s" podCreationTimestamp="2026-02-02 07:07:01 +0000 UTC" firstStartedPulling="2026-02-02 07:07:01.996911099 +0000 UTC m=+1247.374179011" lastFinishedPulling="2026-02-02 07:07:09.715186352 +0000 UTC m=+1255.092454294" observedRunningTime="2026-02-02 07:07:10.886358137 +0000 UTC m=+1256.263626059" watchObservedRunningTime="2026-02-02 07:07:10.88808215 +0000 UTC m=+1256.265350062" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.146762 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.147289 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.413849 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.413917 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.476326 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.493376 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.869880 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.869927 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:07:14 crc kubenswrapper[4842]: I0202 07:07:14.689548 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:07:14 crc kubenswrapper[4842]: I0202 07:07:14.699862 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.215909 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.215984 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.267868 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.278979 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.898863 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.899100 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:17 crc kubenswrapper[4842]: I0202 07:07:17.672894 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:17 crc kubenswrapper[4842]: I0202 07:07:17.674035 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:19 crc kubenswrapper[4842]: I0202 07:07:19.952583 4842 generic.go:334] "Generic (PLEG): container finished" podID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerID="f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611" exitCode=0 Feb 02 07:07:19 crc kubenswrapper[4842]: I0202 07:07:19.952615 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" 
event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerDied","Data":"f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611"} Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.423805 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.575881 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.576019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.576271 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.576340 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.582975 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts" (OuterVolumeSpecName: "scripts") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.584355 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l" (OuterVolumeSpecName: "kube-api-access-mcz9l") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "kube-api-access-mcz9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.631206 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.632733 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data" (OuterVolumeSpecName: "config-data") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679793 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679839 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679859 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679878 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.986433 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerDied","Data":"90c87f6f53b22b92a4b5061d88a8063f32c54f968d8334ec9cca4c935c7373bc"} Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.986484 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c87f6f53b22b92a4b5061d88a8063f32c54f968d8334ec9cca4c935c7373bc" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.986594 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.152565 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:07:22 crc kubenswrapper[4842]: E0202 07:07:22.153401 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerName="nova-cell0-conductor-db-sync" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.153424 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerName="nova-cell0-conductor-db-sync" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.153671 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerName="nova-cell0-conductor-db-sync" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.154411 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.156923 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zt7nb" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.157130 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.165788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.293333 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.293472 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.293555 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.396121 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.396282 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.396481 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.406556 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.406713 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.428492 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.508189 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.833316 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.917618 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 07:07:23 crc kubenswrapper[4842]: I0202 07:07:23.000743 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerStarted","Data":"85e914a150668613743c13aeff477024d4b0461bd9157d8138fdfcfd7144ee67"} Feb 02 07:07:24 crc kubenswrapper[4842]: I0202 07:07:24.015475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerStarted","Data":"75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4"} Feb 02 07:07:24 crc kubenswrapper[4842]: I0202 07:07:24.018288 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:24 crc kubenswrapper[4842]: I0202 07:07:24.045731 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.045703817 podStartE2EDuration="2.045703817s" podCreationTimestamp="2026-02-02 07:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:24.03776201 +0000 UTC m=+1269.415029962" watchObservedRunningTime="2026-02-02 07:07:24.045703817 +0000 UTC m=+1269.422971759" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.068351 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009" exitCode=137 Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.068414 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009"} Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.069287 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"610ef45c658d7af4f1bfccb5ab1bcf0f7f84312f0fd214a19b9a637d039efaf5"} Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.069309 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="610ef45c658d7af4f1bfccb5ab1bcf0f7f84312f0fd214a19b9a637d039efaf5" Feb 02 07:07:29 crc 
kubenswrapper[4842]: I0202 07:07:29.075065 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148057 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148154 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148178 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148260 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148339 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148449 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148501 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.149928 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.150118 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.156563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts" (OuterVolumeSpecName: "scripts") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.160627 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr" (OuterVolumeSpecName: "kube-api-access-dp7mr") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "kube-api-access-dp7mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.210051 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251528 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251578 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251601 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251620 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251638 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.263690 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.287749 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data" (OuterVolumeSpecName: "config-data") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.353970 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.354011 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.078740 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.104208 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.113937 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.143631 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144076 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144102 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144122 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144132 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144160 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144169 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144183 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144192 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144489 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144531 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144554 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144571 4842 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.147117 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.151686 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.151876 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.164563 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272016 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272107 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272128 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272173 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272193 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272293 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373799 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r667c\" (UniqueName: 
\"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373890 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373928 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373965 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374074 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374959 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374982 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.375062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.381526 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.381868 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.389841 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.398384 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.405746 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.476352 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:31 crc kubenswrapper[4842]: W0202 07:07:31.014327 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf0e5e43_2690_43bd_8bc5_412e93b15aa7.slice/crio-11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d WatchSource:0}: Error finding container 11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d: Status 404 returned error can't find the container with id 11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d Feb 02 07:07:31 crc kubenswrapper[4842]: I0202 07:07:31.016208 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:31 crc kubenswrapper[4842]: I0202 07:07:31.099334 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d"} Feb 02 07:07:31 crc kubenswrapper[4842]: I0202 07:07:31.450024 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" path="/var/lib/kubelet/pods/804c0232-0b21-4b4a-973e-620fef26b1de/volumes" Feb 02 07:07:32 crc kubenswrapper[4842]: I0202 07:07:32.112788 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c"} Feb 02 07:07:32 crc kubenswrapper[4842]: I0202 07:07:32.548150 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.069401 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.070968 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.075769 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.076049 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.082150 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.141737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.141836 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.142064 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.142277 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.172950 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70"} Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.173009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3"} Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244256 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244329 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " 
pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244395 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244444 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.250233 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.251428 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.256325 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.264459 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.264832 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.276870 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.293625 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.309337 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.310959 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.313986 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.319992 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.346302 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.346376 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.346403 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.349258 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.396397 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.403659 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.407126 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.446422 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.447993 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448039 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448096 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448116 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448173 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448197 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448238 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.471404 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.491189 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.491243 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.492424 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.508469 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.522260 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.522844 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.534903 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555426 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555475 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555550 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555604 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555628 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555661 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555687 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.559019 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.559342 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.561272 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.571406 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.587295 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.589734 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.601925 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.659936 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.659989 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660035 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660054 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660073 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660101 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660157 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660196 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660225 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660250 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660278 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660293 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660314 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.661498 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.665231 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.669917 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.679897 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.712302 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.729153 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.747411 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.763983 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.764024 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.764462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.764549 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765086 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765111 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765153 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765257 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod 
\"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.766109 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769482 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769584 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769699 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769723 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.772502 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.782480 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.782547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" 
Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.838699 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.915469 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.001854 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"] Feb 02 07:07:34 crc kubenswrapper[4842]: W0202 07:07:34.020334 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1048c2f_1504_465a_b0fb_da368d25f0ff.slice/crio-317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33 WatchSource:0}: Error finding container 317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33: Status 404 returned error can't find the container with id 317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33 Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.165820 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.167618 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.172824 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.173043 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.195149 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.196493 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerStarted","Data":"317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33"} Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.275945 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.276040 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.276143 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 
07:07:34.276195 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.301364 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.338966 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.342689 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378489 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378556 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378652 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378692 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.391984 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.392933 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.404370 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " 
pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.412873 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.496532 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.503898 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.548707 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.005576 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.218440 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerStarted","Data":"9f46e2c0ade54ebb64e6e6a408030ea704892c226f6722e2d58e5f583b4c2039"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.223322 4842 generic.go:334] "Generic (PLEG): container finished" podID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerID="a176e8b4ea564bc302309fcba58a47b8e68f174edeb83a184476a852cc3c272e" exitCode=0 Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.223392 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerDied","Data":"a176e8b4ea564bc302309fcba58a47b8e68f174edeb83a184476a852cc3c272e"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.223417 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerStarted","Data":"451377c79842f0376185bd4f8a1618a4b5a16afcc7be3c0724fb62e157fb3755"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.230308 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerStarted","Data":"55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.232095 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerStarted","Data":"0b0025ccff75b8a427586c74f1235a072bc0cd643e505e2735b58d50091fa295"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.233136 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerStarted","Data":"96da2ab68db04d21f4a7c4434a8ff3b113106acfae59f50f9689e724aa76088b"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.234039 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerStarted","Data":"4399ed66cbe5ee83e1b05af70a328b096fc6683212b7ff5ef2c0328dbfd1bfc0"} 
Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.253536 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.254378 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.272535 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerStarted","Data":"f408d96c1a5dcbacb2299cd3630fe7dab0d27ba0d70de87656f8d0bbabc0a580"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.294125 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-d648k" podStartSLOduration=2.294102532 podStartE2EDuration="2.294102532s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:35.262145269 +0000 UTC m=+1280.639413181" watchObservedRunningTime="2026-02-02 07:07:35.294102532 +0000 UTC m=+1280.671370444" Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.329907 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5813434709999998 podStartE2EDuration="5.329886959s" podCreationTimestamp="2026-02-02 07:07:30 +0000 UTC" firstStartedPulling="2026-02-02 07:07:31.017910297 +0000 UTC m=+1276.395178249" lastFinishedPulling="2026-02-02 07:07:34.766453805 +0000 UTC m=+1280.143721737" observedRunningTime="2026-02-02 07:07:35.285800206 +0000 UTC m=+1280.663068118" watchObservedRunningTime="2026-02-02 07:07:35.329886959 +0000 UTC m=+1280.707154871" Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.311376 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerStarted","Data":"5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52"} Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.312457 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.317313 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerStarted","Data":"2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398"} Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.348201 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" podStartSLOduration=3.348180364 podStartE2EDuration="3.348180364s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:36.329709805 +0000 UTC m=+1281.706977737" watchObservedRunningTime="2026-02-02 07:07:36.348180364 +0000 UTC m=+1281.725448276" Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.380449 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" 
podStartSLOduration=2.380430393 podStartE2EDuration="2.380430393s" podCreationTimestamp="2026-02-02 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:36.344689597 +0000 UTC m=+1281.721957509" watchObservedRunningTime="2026-02-02 07:07:36.380430393 +0000 UTC m=+1281.757698305" Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.912491 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.942642 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.351625 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerStarted","Data":"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7"} Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.352059 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" gracePeriod=30 Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.357763 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerStarted","Data":"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"} Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.357841 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerStarted","Data":"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"} Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.361702 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerStarted","Data":"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"} Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerStarted","Data":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"} Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367396 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerStarted","Data":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"} Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367683 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" containerID="cri-o://09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" gracePeriod=30 Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367709 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata" 
containerID="cri-o://595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" gracePeriod=30 Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.374626 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.637756205 podStartE2EDuration="6.374609753s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.503256388 +0000 UTC m=+1279.880524300" lastFinishedPulling="2026-02-02 07:07:38.240109936 +0000 UTC m=+1283.617377848" observedRunningTime="2026-02-02 07:07:39.372626643 +0000 UTC m=+1284.749894565" watchObservedRunningTime="2026-02-02 07:07:39.374609753 +0000 UTC m=+1284.751877665" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.401528 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.509397201 podStartE2EDuration="6.40150408s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.347983377 +0000 UTC m=+1279.725251289" lastFinishedPulling="2026-02-02 07:07:38.240090266 +0000 UTC m=+1283.617358168" observedRunningTime="2026-02-02 07:07:39.392094236 +0000 UTC m=+1284.769362178" watchObservedRunningTime="2026-02-02 07:07:39.40150408 +0000 UTC m=+1284.778772002" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.408832 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.517085501 podStartE2EDuration="6.408814951s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.348433628 +0000 UTC m=+1279.725701540" lastFinishedPulling="2026-02-02 07:07:38.240163078 +0000 UTC m=+1283.617430990" observedRunningTime="2026-02-02 07:07:39.407302373 +0000 UTC m=+1284.784570295" watchObservedRunningTime="2026-02-02 07:07:39.408814951 +0000 UTC m=+1284.786082873" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.426838 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.491007355 podStartE2EDuration="6.426820868s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.307261167 +0000 UTC m=+1279.684529079" lastFinishedPulling="2026-02-02 07:07:38.24307466 +0000 UTC m=+1283.620342592" observedRunningTime="2026-02-02 07:07:39.425355571 +0000 UTC m=+1284.802623493" watchObservedRunningTime="2026-02-02 07:07:39.426820868 +0000 UTC m=+1284.804088780" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.985944 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119513 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119542 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119730 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.120091 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs" (OuterVolumeSpecName: "logs") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.125509 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht" (OuterVolumeSpecName: "kube-api-access-xncht") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "kube-api-access-xncht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.146276 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.153384 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data" (OuterVolumeSpecName: "config-data") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221632 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221665 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221676 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221687 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.399009 4842 generic.go:334] "Generic (PLEG): container finished" podID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" exitCode=0 Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.399053 4842 generic.go:334] "Generic (PLEG): container finished" podID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" exitCode=143 Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400607 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400773 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerDied","Data":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"} Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400936 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerDied","Data":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"} Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400953 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerDied","Data":"0b0025ccff75b8a427586c74f1235a072bc0cd643e505e2735b58d50091fa295"} Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.401002 4842 scope.go:117] "RemoveContainer" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.424166 4842 scope.go:117] "RemoveContainer" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.449476 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.477453 4842 scope.go:117] "RemoveContainer" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.477937 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": container with ID starting with 595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673 not found: ID does not exist" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.477970 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"} err="failed to get container status \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": rpc error: code = NotFound desc = could not find container \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": container with ID starting with 595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.477991 4842 scope.go:117] "RemoveContainer" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.479181 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": container with ID starting with 09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6 not found: ID does not exist" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479208 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"} err="failed to get container status \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": rpc error: code = NotFound desc = could not find container \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": container with ID starting with 09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479241 4842 scope.go:117] "RemoveContainer" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479475 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"} err="failed to get container status \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": rpc error: code = NotFound desc = could not find container \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": container with ID starting with 595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479495 4842 scope.go:117] "RemoveContainer" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479539 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479686 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"} err="failed to get container status 
\"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": rpc error: code = NotFound desc = could not find container \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": container with ID starting with 09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.490961 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.491428 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491445 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.491483 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491492 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491764 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491781 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.492926 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.501598 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.501704 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.513775 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.630144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.630961 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.631022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.631052 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.631086 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.732869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733003 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733044 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " 
pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733067 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733091 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733846 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.741061 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.741454 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.741554 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.761381 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.813052 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:41 crc kubenswrapper[4842]: I0202 07:07:41.314394 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:41 crc kubenswrapper[4842]: W0202 07:07:41.316983 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0101d15_442a_47f8_9c48_f9c028c63b8b.slice/crio-616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d WatchSource:0}: Error finding container 616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d: Status 404 returned error can't find the container with id 616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d Feb 02 07:07:41 crc kubenswrapper[4842]: I0202 07:07:41.412177 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerStarted","Data":"616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d"} Feb 02 07:07:41 crc kubenswrapper[4842]: I0202 07:07:41.453958 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" path="/var/lib/kubelet/pods/17bc6e8b-33ee-4fee-be1c-2a38b81b6984/volumes" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.146498 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.146900 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.146968 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.148029 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.148138 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49" gracePeriod=600 Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.442009 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49" exitCode=0 Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.442114 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" 
event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.443575 4842 scope.go:117] "RemoveContainer" containerID="fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.446380 4842 generic.go:334] "Generic (PLEG): container finished" podID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerID="55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791" exitCode=0 Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.446453 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerDied","Data":"55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.449326 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerStarted","Data":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.449370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerStarted","Data":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.489850 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.489831685 podStartE2EDuration="2.489831685s" podCreationTimestamp="2026-02-02 07:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:42.482570635 +0000 UTC m=+1287.859838547" watchObservedRunningTime="2026-02-02 07:07:42.489831685 +0000 UTC m=+1287.867099597" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.464203 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"} Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.468480 4842 generic.go:334] "Generic (PLEG): container finished" podID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerID="2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398" exitCode=0 Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.468603 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerDied","Data":"2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398"} Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.713149 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.713410 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.730282 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.730321 4842 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.766861 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.839647 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.916486 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.954479 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.975969 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"] Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.978231 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns" containerID="cri-o://ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb" gracePeriod=10 Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.113830 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.114153 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.114319 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.114484 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.136522 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts" (OuterVolumeSpecName: "scripts") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.136563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk" (OuterVolumeSpecName: "kube-api-access-t4gwk") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). 
InnerVolumeSpecName "kube-api-access-t4gwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.158054 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data" (OuterVolumeSpecName: "config-data") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.164433 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218371 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218537 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218577 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218627 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.419670 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482055 4842 generic.go:334] "Generic (PLEG): container finished" podID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb" exitCode=0 Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482110 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482146 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerDied","Data":"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"}
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482205 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerDied","Data":"1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c"}
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482258 4842 scope.go:117] "RemoveContainer" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.486146 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.486269 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerDied","Data":"317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33"}
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.486816 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523547 4842 scope.go:117] "RemoveContainer" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523778 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523874 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523915 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523943 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.524049 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.524081 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.528998 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.544132 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn" (OuterVolumeSpecName: "kube-api-access-nbgnn") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "kube-api-access-nbgnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.574180 4842 scope.go:117] "RemoveContainer" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"
Feb 02 07:07:44 crc kubenswrapper[4842]: E0202 07:07:44.578125 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb\": container with ID starting with ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb not found: ID does not exist" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.578163 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"} err="failed to get container status \"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb\": rpc error: code = NotFound desc = could not find container \"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb\": container with ID starting with ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb not found: ID does not exist"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.578188 4842 scope.go:117] "RemoveContainer" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"
Feb 02 07:07:44 crc kubenswrapper[4842]: E0202 07:07:44.582685 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775\": container with ID starting with 69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775 not found: ID does not exist" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.582716 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"} err="failed to get container status \"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775\": rpc error: code = NotFound desc = could not find container \"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775\": container with ID starting with 69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775 not found: ID does not exist"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.594238 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.594620 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log" containerID="cri-o://e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.594749 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api" containerID="cri-o://c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.605888 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.607998 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": EOF"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.608127 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": EOF"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.610460 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.610689 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log" containerID="cri-o://142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.611073 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata" containerID="cri-o://90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.617634 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config" (OuterVolumeSpecName: "config") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.621804 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.622414 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.624393 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626546 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626570 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626579 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626588 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626596 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.653364 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.729158 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.851480 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.875856 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.887299 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.935999 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.936921 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.936948 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.937169 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.943305 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts" (OuterVolumeSpecName: "scripts") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.961615 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g" (OuterVolumeSpecName: "kube-api-access-7668g") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "kube-api-access-7668g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.964661 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data" (OuterVolumeSpecName: "config-data") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.997322 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041536 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041565 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041575 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041585 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.207671 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346450 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346606 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346707 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346806 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346919 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs" (OuterVolumeSpecName: "logs") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.348291 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.350257 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt" (OuterVolumeSpecName: "kube-api-access-xb5kt") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "kube-api-access-xb5kt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.383503 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.384319 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data" (OuterVolumeSpecName: "config-data") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.406893 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453527 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453804 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453880 4842 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453949 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.469811 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" path="/var/lib/kubelet/pods/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16/volumes"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498074 4842 generic.go:334] "Generic (PLEG): container finished" podID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5" exitCode=0
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498105 4842 generic.go:334] "Generic (PLEG): container finished" podID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f" exitCode=143
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498130 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498148 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerDied","Data":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498182 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerDied","Data":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498192 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerDied","Data":"616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498259 4842 scope.go:117] "RemoveContainer" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.500682 4842 generic.go:334] "Generic (PLEG): container finished" podID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229" exitCode=143
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.500731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerDied","Data":"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.505041 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.505498 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerDied","Data":"f408d96c1a5dcbacb2299cd3630fe7dab0d27ba0d70de87656f8d0bbabc0a580"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.505539 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f408d96c1a5dcbacb2299cd3630fe7dab0d27ba0d70de87656f8d0bbabc0a580"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.544620 4842 scope.go:117] "RemoveContainer" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.544951 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.557930 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583066 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583497 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerName="nova-manage"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583513 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerName="nova-manage"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583524 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583531 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583547 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="init"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583555 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="init"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583564 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583569 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583581 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerName="nova-cell1-conductor-db-sync"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583587 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerName="nova-cell1-conductor-db-sync"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583602 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583608 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583779 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583794 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerName="nova-cell1-conductor-db-sync"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583808 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583832 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583845 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerName="nova-manage"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.584724 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587304 4842 scope.go:117] "RemoveContainer" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587504 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587556 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.587609 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": container with ID starting with 90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5 not found: ID does not exist" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587642 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"} err="failed to get container status \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": rpc error: code = NotFound desc = could not find container \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": container with ID starting with 90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5 not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587662 4842 scope.go:117] "RemoveContainer" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.587978 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": container with ID starting with 142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f not found: ID does not exist" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.588001 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"} err="failed to get container status \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": rpc error: code = NotFound desc = could not find container \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": container with ID starting with 142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.588018 4842 scope.go:117] "RemoveContainer" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.588265 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"} err="failed to get container status \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": rpc error: code = NotFound desc = could not find container \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": container with ID starting with 90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5 not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.589703 4842 scope.go:117] "RemoveContainer" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.590001 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"} err="failed to get container status \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": rpc error: code = NotFound desc = could not find container \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": container with ID starting with 142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.605634 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.619051 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.622730 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.625654 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.640018 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760551 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760622 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760650 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760720 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760764 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760783 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760801 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760821 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868438 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868505 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868526 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868544 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868565 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868594 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868628 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868651 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.872527 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.874555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.874651 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.876317 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.880327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.884991 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.903867 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.906409 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.932382 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.942849 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.424922 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.518592 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerStarted","Data":"f8175b6df5dfbdeb4f2b96118c96bb8462df0286a53b3bdcaea8cf46054c0053"}
Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.520725 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.518664 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler" containerID="cri-o://4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" gracePeriod=30
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.448857 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" path="/var/lib/kubelet/pods/d0101d15-442a-47f8-9c48-f9c028c63b8b/volumes"
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.528777 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerStarted","Data":"582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa"}
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.528855 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerStarted","Data":"e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1"}
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.528873 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerStarted","Data":"a1edffd6229fcfd445e770ea5551a81134a2ceed05cbf411c15f38de72a6bfa9"}
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.532820 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerStarted","Data":"b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2"}
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.533059 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.565459 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.565432606 podStartE2EDuration="2.565432606s" podCreationTimestamp="2026-02-02 07:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:47.552044334 +0000 UTC m=+1292.929312256" watchObservedRunningTime="2026-02-02 07:07:47.565432606 +0000 UTC m=+1292.942700538"
Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.591269 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5912473560000002 podStartE2EDuration="2.591247356s" podCreationTimestamp="2026-02-02 07:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:47.576576022 +0000 UTC m=+1292.953843934" watchObservedRunningTime="2026-02-02 07:07:47.591247356 +0000 UTC m=+1292.968515268"
Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.716528 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.719432 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.721812 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.721988 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.468113 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.567794 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") "
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.568024 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") "
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.568066 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") "
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569772 4842 generic.go:334] "Generic (PLEG): container finished" podID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" exitCode=0
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569824 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerDied","Data":"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"}
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569847 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerDied","Data":"9f46e2c0ade54ebb64e6e6a408030ea704892c226f6722e2d58e5f583b4c2039"}
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569864 4842 scope.go:117] "RemoveContainer" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569999 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.570788 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.572648 4842 generic.go:334] "Generic (PLEG): container finished" podID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea" exitCode=0
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.572668 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerDied","Data":"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"}
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.572683 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerDied","Data":"4399ed66cbe5ee83e1b05af70a328b096fc6683212b7ff5ef2c0328dbfd1bfc0"}
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.575004 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq" (OuterVolumeSpecName: "kube-api-access-wkhsq") pod "6d440b49-02aa-4a41-9055-8c58b5f9b1f9" (UID: "6d440b49-02aa-4a41-9055-8c58b5f9b1f9"). InnerVolumeSpecName "kube-api-access-wkhsq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.595567 4842 scope.go:117] "RemoveContainer" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"
Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.596554 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb\": container with ID starting with 4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb not found: ID does not exist" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.596610 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"} err="failed to get container status \"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb\": rpc error: code = NotFound desc = could not find container \"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb\": container with ID starting with 4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb not found: ID does not exist"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.596651 4842 scope.go:117] "RemoveContainer" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.603146 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data" (OuterVolumeSpecName: "config-data") pod "6d440b49-02aa-4a41-9055-8c58b5f9b1f9" (UID: "6d440b49-02aa-4a41-9055-8c58b5f9b1f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.610958 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d440b49-02aa-4a41-9055-8c58b5f9b1f9" (UID: "6d440b49-02aa-4a41-9055-8c58b5f9b1f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.617803 4842 scope.go:117] "RemoveContainer" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.637392 4842 scope.go:117] "RemoveContainer" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"
Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.637774 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea\": container with ID starting with c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea not found: ID does not exist" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.637801 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"} err="failed to get container status \"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea\": rpc error: code = NotFound desc = could not find container \"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea\": container with ID starting with c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea not found: ID does not exist"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.637821 4842 scope.go:117] "RemoveContainer" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"
Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.638912 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229\": container with ID starting with e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229 not found: ID does not exist" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.638976 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"} err="failed to get container status \"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229\": rpc error: code = NotFound desc = could not find container \"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229\": container with ID starting with e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229 not found: ID does not exist"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") "
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") "
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670349 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") "
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670375 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") "
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670776 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670793 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670804 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.671198 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs" (OuterVolumeSpecName: "logs") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.674831 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq" (OuterVolumeSpecName: "kube-api-access-zh6zq") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "kube-api-access-zh6zq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.696112 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data" (OuterVolumeSpecName: "config-data") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.696923 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772307 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772350 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772366 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772380 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.900228 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.908831 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.923686 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.924119 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924138 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api"
Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.924157 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924166 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler"
Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.924179 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924190 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924428 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924461 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924477 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.925112 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.927331 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.932705 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.932789 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.938178 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.005825 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.005904 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.006236 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.108036 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.108954 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.109062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.113783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.113896 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.125934 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.300513 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.446772 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" path="/var/lib/kubelet/pods/6d440b49-02aa-4a41-9055-8c58b5f9b1f9/volumes"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.582345 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.605681 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.630412 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.638498 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.640581 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.643292 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.649352 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731672 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731741 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731774 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0"
Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731803 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0"
Feb 02 07:07:51
crc kubenswrapper[4842]: I0202 07:07:51.793613 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:51 crc kubenswrapper[4842]: W0202 07:07:51.802053 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46ba09a5_eecd_46b6_9182_96444c6de570.slice/crio-968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975 WatchSource:0}: Error finding container 968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975: Status 404 returned error can't find the container with id 968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975 Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.833922 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834014 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834063 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834107 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834852 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.839900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.840807 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.871172 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.964998 4842 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.577075 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:52 crc kubenswrapper[4842]: W0202 07:07:52.577799 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc80be6c0_a1f6_43d6_ba9d_9affaf8daff2.slice/crio-1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316 WatchSource:0}: Error finding container 1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316: Status 404 returned error can't find the container with id 1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316 Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.609068 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerStarted","Data":"fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c"} Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.609118 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerStarted","Data":"968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975"} Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.616445 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerStarted","Data":"1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316"} Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:52.635416 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6353968180000003 podStartE2EDuration="2.635396818s" podCreationTimestamp="2026-02-02 07:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:52.6257974 +0000 UTC m=+1298.003065322" watchObservedRunningTime="2026-02-02 07:07:52.635396818 +0000 UTC m=+1298.012664730" Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.456704 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" path="/var/lib/kubelet/pods/1b930b76-12ee-4261-b822-7fbfe5bcdec7/volumes" Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.629743 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerStarted","Data":"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1"} Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.629826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerStarted","Data":"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5"} Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.670602 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.670575351 podStartE2EDuration="2.670575351s" podCreationTimestamp="2026-02-02 07:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:53.657295462 +0000 UTC m=+1299.034563414" 
watchObservedRunningTime="2026-02-02 07:07:53.670575351 +0000 UTC m=+1299.047843303" Feb 02 07:07:55 crc kubenswrapper[4842]: I0202 07:07:55.932737 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:07:55 crc kubenswrapper[4842]: I0202 07:07:55.933304 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.006113 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.301438 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.954394 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.954968 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:00 crc kubenswrapper[4842]: I0202 07:08:00.489350 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.300757 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.342810 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.779950 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.966195 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.966286 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:03 crc kubenswrapper[4842]: I0202 07:08:03.048425 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:03 crc kubenswrapper[4842]: I0202 07:08:03.048440 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.343437 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.343883 4842 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/kube-state-metrics-0" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" containerID="cri-o://7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d" gracePeriod=30 Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.764007 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerID="7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d" exitCode=2 Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.764094 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerDied","Data":"7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d"} Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.884528 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.059320 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.068885 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv" (OuterVolumeSpecName: "kube-api-access-2vmlv") pod "0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" (UID: "0d9bebc9-9e67-4019-bdf8-22e78dfc3d14"). InnerVolumeSpecName "kube-api-access-2vmlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.161619 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.777922 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerDied","Data":"db5e53906e871ace039a809b4c17e0f0a9393b7521bbea23546882f45795c673"} Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.778011 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.778281 4842 scope.go:117] "RemoveContainer" containerID="7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.812901 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.836954 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.846360 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: E0202 07:08:05.846813 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.846831 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.847002 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.847701 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.849369 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.849725 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.856446 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.938946 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.939233 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.944376 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.977006 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.977938 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.978157 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.978245 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080257 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080481 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080525 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.085656 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.087055 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.095300 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.107191 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x268\" (UniqueName: 
\"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.201385 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.207298 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.207762 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" containerID="cri-o://dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.207986 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" containerID="cri-o://de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.208102 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" containerID="cri-o://c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.208202 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" containerID="cri-o://178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.702150 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:06 crc kubenswrapper[4842]: W0202 07:08:06.704534 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b11cfdf_ed7a_48ce_97eb_e03cd6be314c.slice/crio-c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f WatchSource:0}: Error finding container c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f: Status 404 returned error can't find the container with id c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796094 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" exitCode=0 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796484 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" exitCode=2 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796169 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796563 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796504 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" exitCode=0 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796609 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.798535 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerStarted","Data":"c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.807559 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:08:07 crc kubenswrapper[4842]: I0202 07:08:07.450771 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" path="/var/lib/kubelet/pods/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14/volumes" Feb 02 07:08:07 crc kubenswrapper[4842]: I0202 07:08:07.809444 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerStarted","Data":"75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9"} Feb 02 07:08:07 crc kubenswrapper[4842]: I0202 07:08:07.827855 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.423788588 podStartE2EDuration="2.827835899s" podCreationTimestamp="2026-02-02 07:08:05 +0000 UTC" firstStartedPulling="2026-02-02 07:08:06.706745006 +0000 UTC m=+1312.084012928" lastFinishedPulling="2026-02-02 07:08:07.110792327 +0000 UTC m=+1312.488060239" observedRunningTime="2026-02-02 07:08:07.826600849 +0000 UTC m=+1313.203868761" watchObservedRunningTime="2026-02-02 07:08:07.827835899 +0000 UTC m=+1313.205103811" Feb 02 07:08:08 crc kubenswrapper[4842]: I0202 07:08:08.821249 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.553260 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.657985 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658138 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658183 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658205 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658244 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658297 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658861 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.659981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.666447 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c" (OuterVolumeSpecName: "kube-api-access-r667c") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "kube-api-access-r667c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.682503 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts" (OuterVolumeSpecName: "scripts") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.723544 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760652 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760888 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760898 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760908 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760915 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.767765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.782380 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data" (OuterVolumeSpecName: "config-data") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.807780 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.836164 4842 generic.go:334] "Generic (PLEG): container finished" podID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" exitCode=137 Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837264 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerDied","Data":"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerDied","Data":"96da2ab68db04d21f4a7c4434a8ff3b113106acfae59f50f9689e724aa76088b"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837428 4842 scope.go:117] "RemoveContainer" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837668 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844280 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" exitCode=0 Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844440 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844488 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844875 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862055 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862788 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862825 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.870500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98" (OuterVolumeSpecName: "kube-api-access-pst98") pod "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" (UID: "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36"). InnerVolumeSpecName "kube-api-access-pst98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.874379 4842 scope.go:117] "RemoveContainer" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.877311 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7\": container with ID starting with 3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7 not found: ID does not exist" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.877364 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7"} err="failed to get container status \"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7\": rpc error: code = NotFound desc = could not find container \"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7\": container with ID starting with 3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7 not found: ID does not exist" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.877394 4842 scope.go:117] "RemoveContainer" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.896497 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data" (OuterVolumeSpecName: "config-data") pod "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" (UID: "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.900247 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.905077 4842 scope.go:117] "RemoveContainer" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.911409 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917476 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917810 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917828 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917837 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917844 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917857 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917863 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917891 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917897 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917910 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917918 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918092 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918110 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918125 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918136 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" Feb 02 
07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918144 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.919652 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" (UID: "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.919722 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.950820 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.951118 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.951906 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.961909 4842 scope.go:117] "RemoveContainer" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.963910 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.963990 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964019 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964045 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964105 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964139 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964156 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964176 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964244 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964259 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964272 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.980165 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.990209 4842 scope.go:117] "RemoveContainer" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.012377 4842 scope.go:117] "RemoveContainer" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 07:08:10.012876 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01\": container with ID starting with de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01 not found: ID does not exist" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.012911 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01"} err="failed to get container status \"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01\": rpc error: code = NotFound desc = could not find container \"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01\": container with ID starting with de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01 not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.012952 4842 scope.go:117] "RemoveContainer" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 
07:08:10.013381 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70\": container with ID starting with c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70 not found: ID does not exist" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013413 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70"} err="failed to get container status \"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70\": rpc error: code = NotFound desc = could not find container \"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70\": container with ID starting with c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70 not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013436 4842 scope.go:117] "RemoveContainer" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 07:08:10.013725 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3\": container with ID starting with 178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3 not found: ID does not exist" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013746 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3"} err="failed to get container status \"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3\": rpc error: code = NotFound desc = could not find container \"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3\": container with ID starting with 178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3 not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013757 4842 scope.go:117] "RemoveContainer" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 07:08:10.013971 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c\": container with ID starting with dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c not found: ID does not exist" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013990 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c"} err="failed to get container status \"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c\": rpc error: code = NotFound desc = could not find container \"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c\": container with ID starting with dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066004 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066060 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066120 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066175 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066207 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066249 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066343 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066394 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066836 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.069891 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.069957 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.070394 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.070759 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.071268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.081265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.246902 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.271643 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.271882 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.285469 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.286787 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.290359 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.290532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.290377 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.295334 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.379628 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.379878 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.379902 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.380085 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.380291 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482355 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482377 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482488 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482592 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.487810 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.489233 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.490822 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.498871 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.502283 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.683696 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: W0202 07:08:10.740354 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e9dbec6_ac74_4b3c_8c31_734a574dade3.slice/crio-ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92 WatchSource:0}: Error finding container ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92: Status 404 returned error can't find the container with id ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92 Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.746538 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.859750 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.004548 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:11 crc kubenswrapper[4842]: W0202 07:08:11.008119 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a6e38b7_4a6d_4d93_af3d_5abac4efc44d.slice/crio-c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8 WatchSource:0}: Error finding container c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8: Status 404 returned error can't find the container with id c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8 Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.453187 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" path="/var/lib/kubelet/pods/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36/volumes" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.454768 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" path="/var/lib/kubelet/pods/cf0e5e43-2690-43bd-8bc5-412e93b15aa7/volumes" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.883430 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.885767 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d","Type":"ContainerStarted","Data":"19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.885819 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d","Type":"ContainerStarted","Data":"c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.916812 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.916789691 podStartE2EDuration="1.916789691s" podCreationTimestamp="2026-02-02 07:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:11.914825432 +0000 UTC m=+1317.292093354" watchObservedRunningTime="2026-02-02 07:08:11.916789691 +0000 UTC m=+1317.294057613" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.973524 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.974563 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.980814 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.990996 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.897935 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.898584 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.898620 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.902023 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.086763 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.091586 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.107402 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139190 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139252 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139287 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139323 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139444 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139600 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240602 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240693 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240729 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240745 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240791 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.241639 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.242370 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.242862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.243366 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.243872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.258604 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg6j8\" (UniqueName: 
\"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.415919 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.959348 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:08:14 crc kubenswrapper[4842]: I0202 07:08:14.918097 4842 generic.go:334] "Generic (PLEG): container finished" podID="82827ec9-ac05-41ab-988c-99083ccdb949" containerID="8bb94b1491e283b01c189ac6006d3fc23945dfbdff62fb805e090497b073e7c4" exitCode=0 Feb 02 07:08:14 crc kubenswrapper[4842]: I0202 07:08:14.918201 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerDied","Data":"8bb94b1491e283b01c189ac6006d3fc23945dfbdff62fb805e090497b073e7c4"} Feb 02 07:08:14 crc kubenswrapper[4842]: I0202 07:08:14.918500 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerStarted","Data":"3b795fd687296b78b29dffde7f9f5a14bcbd688f6a97aac6389de0b8b43b6094"} Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.420650 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.684074 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.933480 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.934687 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.938425 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" containerID="cri-o://04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" gracePeriod=30 Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.939355 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerStarted","Data":"b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3"} Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.939386 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.939435 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" containerID="cri-o://3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" gracePeriod=30 Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.963844 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] 
Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.992566 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9197622340000002 podStartE2EDuration="6.992545885s" podCreationTimestamp="2026-02-02 07:08:09 +0000 UTC" firstStartedPulling="2026-02-02 07:08:10.742290512 +0000 UTC m=+1316.119558424" lastFinishedPulling="2026-02-02 07:08:14.815074153 +0000 UTC m=+1320.192342075" observedRunningTime="2026-02-02 07:08:15.983285005 +0000 UTC m=+1321.360552927" watchObservedRunningTime="2026-02-02 07:08:15.992545885 +0000 UTC m=+1321.369813797" Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.012163 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" podStartSLOduration=3.012140521 podStartE2EDuration="3.012140521s" podCreationTimestamp="2026-02-02 07:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:16.004099611 +0000 UTC m=+1321.381367543" watchObservedRunningTime="2026-02-02 07:08:16.012140521 +0000 UTC m=+1321.389408423" Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.221099 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.951064 4842 generic.go:334] "Generic (PLEG): container finished" podID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" exitCode=143 Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.951149 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerDied","Data":"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5"} Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.961261 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" containerID="cri-o://f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" gracePeriod=30 Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.962364 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" containerID="cri-o://7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" gracePeriod=30 Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.962489 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" containerID="cri-o://dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" gracePeriod=30 Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.962560 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" containerID="cri-o://f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" gracePeriod=30 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.702990 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754465 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754610 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754643 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754671 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754719 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754737 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754774 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754812 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.757573 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.758095 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.762169 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts" (OuterVolumeSpecName: "scripts") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.762589 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv" (OuterVolumeSpecName: "kube-api-access-bstwv") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "kube-api-access-bstwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.799364 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.810638 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.834158 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.856643 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.856969 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857062 4842 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857140 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857230 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857319 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857396 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.874175 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data" (OuterVolumeSpecName: "config-data") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.959770 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987245 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" exitCode=0 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987275 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" exitCode=2 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987286 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" exitCode=0 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987294 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" exitCode=0 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987318 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987328 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987350 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987367 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987379 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987388 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987403 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.017268 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.032446 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 
07:08:19.041638 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.057042 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.073803 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074259 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074272 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074296 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074302 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074324 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074330 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074343 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074349 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074520 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074530 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074538 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074556 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.076264 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.082655 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.083727 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.084498 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.092128 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.092994 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.163495 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.163805 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.163920 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164018 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164247 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164372 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164409 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164506 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.256813 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"
Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.258483 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.258535 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.258582 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"
Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.258930 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.258986 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259018 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"
Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.259325 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259354 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259377 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"
Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.259633 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259677 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259694 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259931 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259951 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260164 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260183 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260455 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260470 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260725 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260743 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260985 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261012 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261456 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261481 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261750 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261772 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262104 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262130 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262430 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262457 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263550 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263583 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263829 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263854 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"
Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.264143 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist"
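[Note] The repeated NotFound cycle above is the kubelet re-running container cleanup for IDs that CRI-O has already deleted: each pass asks the runtime for the container's status first, and the runtime answers with gRPC NotFound. A minimal Go sketch of the same CRI ContainerStatus call, for reproducing this by hand; the socket path is the usual CRI-O location and, together with the hardcoded ID, is an assumption rather than something the log states:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/credentials/insecure"
    	"google.golang.org/grpc/status"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumption: CRI-O listens on this socket, as it typically does on a CRC node.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// ID copied from the log; since it was already removed, this call
    	// should fail with the same NotFound the kubelet reports above.
    	id := "f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"
    	resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
    	if status.Code(err) == codes.NotFound {
    		fmt.Println("container already gone:", err)
    		return
    	}
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("state:", resp.GetStatus().GetState())
    }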
container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.266583 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.266998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.267153 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.267945 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268088 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268141 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268296 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268429 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 
07:08:19.268456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.273349 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.274675 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.275291 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.275835 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.276497 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.289395 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.444925 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" path="/var/lib/kubelet/pods/3e9dbec6-ac74-4b3c-8c31-734a574dade3/volumes" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.486885 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.568268 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572069 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572097 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572146 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572523 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs" (OuterVolumeSpecName: "logs") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.573308 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.578914 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc" (OuterVolumeSpecName: "kube-api-access-gngbc") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "kube-api-access-gngbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.602542 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.649088 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data" (OuterVolumeSpecName: "config-data") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.675441 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.675473 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.675482 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002575 4842 generic.go:334] "Generic (PLEG): container finished" podID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" exitCode=0 Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002637 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerDied","Data":"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1"} Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002658 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002679 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerDied","Data":"1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316"} Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002694 4842 scope.go:117] "RemoveContainer" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.030375 4842 scope.go:117] "RemoveContainer" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.058133 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.072612 4842 scope.go:117] "RemoveContainer" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.073294 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1\": container with ID starting with 3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1 not found: ID does not exist" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.073360 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1"} err="failed to get container status \"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1\": rpc error: code = NotFound desc = could not find container \"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1\": container with ID starting with 
3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1 not found: ID does not exist" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.073400 4842 scope.go:117] "RemoveContainer" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.076962 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5\": container with ID starting with 04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5 not found: ID does not exist" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.077004 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5"} err="failed to get container status \"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5\": rpc error: code = NotFound desc = could not find container \"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5\": container with ID starting with 04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5 not found: ID does not exist" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.078859 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.090151 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.097523 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.098123 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098157 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.098179 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098191 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098561 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098606 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.100176 4842 util.go:30] "No sandbox for pod can be found. 
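[Note] The SyncLoop DELETE/REMOVE/ADD run above is the kubelet reacting to nova-api-0 being deleted and immediately recreated in the API (new UID b4a4e099-...), after which RemoveStaleState clears the old UID's CPU- and memory-manager bookkeeping. A hedged client-go sketch of observing the same pod churn from the API side; the kubeconfig location is an assumption, and the event types shown are the API watch's view (a graceful delete first appears as MODIFIED with a deletionTimestamp, then DELETED once the object is gone):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: a kubeconfig at the default location with access to "openstack".
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	w, err := cs.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()

    	// ADDED/MODIFIED/DELETED are the api-source events the kubelet
    	// surfaces as "SyncLoop ADD/UPDATE/DELETE/REMOVE" in the log above.
    	for ev := range w.ResultChan() {
    		fmt.Println(ev.Type)
    	}
    }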
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106379 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106434 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106439 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106489 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.186895 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.186956 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.186972 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.187254 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.187303 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.187355 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288363 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288403 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288478 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288494 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288519 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288566 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.289827 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.293956 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.296165 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.296385 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.310042 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.310826 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.458633 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.684281 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.707764 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.968885 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:08:20 crc kubenswrapper[4842]: W0202 07:08:20.970123 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4a4e099_0255_49f4_bcb4_7962af32cad2.slice/crio-05bed553a9d1167fc6969d8d0d674b6850e5b78bc317f359dad785df3a643e85 WatchSource:0}: Error finding container 05bed553a9d1167fc6969d8d0d674b6850e5b78bc317f359dad785df3a643e85: Status 404 returned error can't find the container with id 05bed553a9d1167fc6969d8d0d674b6850e5b78bc317f359dad785df3a643e85
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.013450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621"}
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.013490 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"dc072634ce1fdc7d7f270a2d47917083559fd131ffec946966f43f1f6581f8f4"}
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.015225 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerStarted","Data":"05bed553a9d1167fc6969d8d0d674b6850e5b78bc317f359dad785df3a643e85"}
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.030318 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.293714 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.295799 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.300070 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.300651 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312051 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312101 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312162 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312267 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.313186 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413700 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413766 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413806 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413842 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.420724 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.421683 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.421694 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.430957 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.444160 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" path="/var/lib/kubelet/pods/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2/volumes"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.673128 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn"
Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.024464 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1"}
Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.026544 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerStarted","Data":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"}
Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.026602 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerStarted","Data":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"}
Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.043511 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.043497107 podStartE2EDuration="2.043497107s" podCreationTimestamp="2026-02-02 07:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:22.041888097 +0000 UTC m=+1327.419156029" watchObservedRunningTime="2026-02-02 07:08:22.043497107 +0000 UTC m=+1327.420765019"
Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.140185 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:08:22 crc kubenswrapper[4842]: W0202 07:08:22.142343 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38cfcc24_6854_414a_9d6c_4769e1366eb1.slice/crio-80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce WatchSource:0}: Error finding container 80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce: Status 404 returned error can't find the container with id 80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce
Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.039708 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef"}
Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.043605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerStarted","Data":"999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c"}
Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.043654 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerStarted","Data":"80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce"}
Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.066301 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-77gxn" podStartSLOduration=2.066285682 podStartE2EDuration="2.066285682s" podCreationTimestamp="2026-02-02 07:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:23.064080668 +0000 UTC m=+1328.441348590" watchObservedRunningTime="2026-02-02 07:08:23.066285682 +0000 UTC m=+1328.443553594"
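[Note] Reading the "Observed pod startup duration" lines: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (for nova-api-0: 07:08:22.043497107 - 07:08:20 = 2.043497107s), and firstStartedPulling/lastFinishedPulling stuck at the zero time (0001-01-01) means no image pull was observed, so podStartSLOduration equals the E2E figure. Compare ceilometer-0 further down: its SLO figure (~2.24s) is the E2E duration (~6.11s) minus the ~3.87s spent pulling between firstStartedPulling (07:08:20.080) and lastFinishedPulling (07:08:23.952), since pull time is excluded from the SLO metric.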
UTC" observedRunningTime="2026-02-02 07:08:23.064080668 +0000 UTC m=+1328.441348590" watchObservedRunningTime="2026-02-02 07:08:23.066285682 +0000 UTC m=+1328.443553594" Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.421339 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.525649 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.525890 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" containerID="cri-o://5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52" gracePeriod=10 Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.053870 4842 generic.go:334] "Generic (PLEG): container finished" podID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerID="5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52" exitCode=0 Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.053904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerDied","Data":"5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52"} Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.054246 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerDied","Data":"451377c79842f0376185bd4f8a1618a4b5a16afcc7be3c0724fb62e157fb3755"} Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.054282 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451377c79842f0376185bd4f8a1618a4b5a16afcc7be3c0724fb62e157fb3755" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.130744 4842 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.130744 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9"
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.270722 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") "
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271001 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") "
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271032 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") "
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271147 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") "
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271198 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") "
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271288 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") "
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.275053 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7" (OuterVolumeSpecName: "kube-api-access-55tx7") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "kube-api-access-55tx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.317920 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.324408 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.330659 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.342666 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.343122 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config" (OuterVolumeSpecName: "config") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373673 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373704 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373716 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373725 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373748 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373756 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") on node \"crc\" DevicePath \"\""
Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.085852 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9"
Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.085981 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3"}
Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.114090 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2417673320000002 podStartE2EDuration="6.114073351s" podCreationTimestamp="2026-02-02 07:08:19 +0000 UTC" firstStartedPulling="2026-02-02 07:08:20.080141552 +0000 UTC m=+1325.457409504" lastFinishedPulling="2026-02-02 07:08:23.952447611 +0000 UTC m=+1329.329715523" observedRunningTime="2026-02-02 07:08:25.107762204 +0000 UTC m=+1330.485030126" watchObservedRunningTime="2026-02-02 07:08:25.114073351 +0000 UTC m=+1330.491341253"
Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.132981 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"]
Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.140936 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"]
Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.447981 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" path="/var/lib/kubelet/pods/9e447f46-c8cc-42f2-92e6-1465a9f407c6/volumes"
Feb 02 07:08:26 crc kubenswrapper[4842]: I0202 07:08:26.097649 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 07:08:28 crc kubenswrapper[4842]: I0202 07:08:28.121773 4842 generic.go:334] "Generic (PLEG): container finished" podID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerID="999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c" exitCode=0
Feb 02 07:08:28 crc kubenswrapper[4842]: I0202 07:08:28.122137 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerDied","Data":"999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c"}
Feb 02 07:08:28 crc kubenswrapper[4842]: I0202 07:08:28.917580 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: i/o timeout"
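[Note] The late readiness-probe failure above fires against the already-deleted dnsmasq pod (killed at 07:08:23-25), so the dial to its former IP times out. The probe output implies a TCP check on port 5353; a sketch of a probe of that shape in Go, with values that are illustrative rather than the operator's actual settings:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	// Assumption: a TCP readiness probe on 5353, matching the "dial tcp
    	// <podIP>:5353" in the failure output above.
    	probe := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(5353)},
    		},
    		TimeoutSeconds: 1, // a dial exceeding this surfaces as "i/o timeout"
    		PeriodSeconds:  10,
    	}
    	fmt.Println(probe.TCPSocket.Port.IntValue())
    }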
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.598066 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.598649 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.598911 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.599134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.606322 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l" (OuterVolumeSpecName: "kube-api-access-tqs5l") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "kube-api-access-tqs5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.607483 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts" (OuterVolumeSpecName: "scripts") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.647693 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.656530 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data" (OuterVolumeSpecName: "config-data") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701890 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701932 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701946 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701955 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.147340 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerDied","Data":"80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce"} Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.147396 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce" Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.147430 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.368676 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.369268 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" containerID="cri-o://c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.369306 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" containerID="cri-o://89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.385388 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.385573 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" containerID="cri-o://fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.444470 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.446131 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" 
containerName="nova-metadata-metadata" containerID="cri-o://582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.446370 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" containerID="cri-o://e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1" gracePeriod=30 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.149933 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156365 4842 generic.go:334] "Generic (PLEG): container finished" podID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" exitCode=0 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156406 4842 generic.go:334] "Generic (PLEG): container finished" podID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" exitCode=143 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156428 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156452 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerDied","Data":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerDied","Data":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156509 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerDied","Data":"05bed553a9d1167fc6969d8d0d674b6850e5b78bc317f359dad785df3a643e85"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156525 4842 scope.go:117] "RemoveContainer" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.158684 4842 generic.go:334] "Generic (PLEG): container finished" podID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerID="e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1" exitCode=143 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.158711 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerDied","Data":"e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.195627 4842 scope.go:117] "RemoveContainer" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.216978 4842 scope.go:117] "RemoveContainer" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.217494 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": container with ID starting with 89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452 not found: ID does not exist" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.217538 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"} err="failed to get container status \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": rpc error: code = NotFound desc = could not find container \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": container with ID starting with 89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452 not found: ID does not exist" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.217571 4842 scope.go:117] "RemoveContainer" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.218130 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": container with ID starting with c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4 not found: ID does not exist" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218156 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"} err="failed to get container status \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": rpc error: code = NotFound desc = could not find container \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": container with ID starting with c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4 not found: ID does not exist" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218171 4842 scope.go:117] "RemoveContainer" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218546 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"} err="failed to get container status \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": rpc error: code = NotFound desc = could not find container \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": container with ID starting with 89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452 not found: ID does not exist" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218564 4842 scope.go:117] "RemoveContainer" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218817 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"} err="failed to get container status \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": rpc error: code = NotFound desc = could not find container \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": container with ID starting with 
c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4 not found: ID does not exist" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.303025 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.304322 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.305709 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.305795 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.331802 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") pod \"b4a4e099-0255-49f4-bcb4-7962af32cad2\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.331906 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") pod \"b4a4e099-0255-49f4-bcb4-7962af32cad2\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.331987 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") pod \"b4a4e099-0255-49f4-bcb4-7962af32cad2\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.332018 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") pod \"b4a4e099-0255-49f4-bcb4-7962af32cad2\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.332040 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") pod \"b4a4e099-0255-49f4-bcb4-7962af32cad2\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.332096 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"b4a4e099-0255-49f4-bcb4-7962af32cad2\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.332838 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs" (OuterVolumeSpecName: "logs") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.338977 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6" (OuterVolumeSpecName: "kube-api-access-bc7g6") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "kube-api-access-bc7g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.380014 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data" (OuterVolumeSpecName: "config-data") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.382871 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.395244 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.402065 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434253 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434290 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434304 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434318 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434330 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434343 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.498743 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.521027 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.526849 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527352 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527372 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527397 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527409 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527425 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527433 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527451 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerName="nova-manage" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527460 4842 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerName="nova-manage" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527480 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="init" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527488 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="init" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527718 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527744 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerName="nova-manage" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527758 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527774 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.528955 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.533460 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.539760 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.539944 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.540159 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.640918 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641191 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641432 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641587 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " 
pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641797 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.743771 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744072 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744188 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744347 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744445 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744514 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.745290 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.748501 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " 
pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.748521 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.753135 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.753894 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.772919 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.874575 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:32 crc kubenswrapper[4842]: I0202 07:08:32.400417 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.186641 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerStarted","Data":"bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00"} Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.188066 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerStarted","Data":"1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc"} Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.188196 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerStarted","Data":"22718259310cd947182a28b08951d593ee087b709a27af6ee23d9b940e93c5ac"} Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.218517 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.2184964 podStartE2EDuration="2.2184964s" podCreationTimestamp="2026-02-02 07:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:33.213077025 +0000 UTC m=+1338.590344937" watchObservedRunningTime="2026-02-02 07:08:33.2184964 +0000 UTC m=+1338.595764312" Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.451327 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" path="/var/lib/kubelet/pods/b4a4e099-0255-49f4-bcb4-7962af32cad2/volumes" Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.910851 4842 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:49254->10.217.0.193:8775: read: connection reset by peer" Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.910851 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:49270->10.217.0.193:8775: read: connection reset by peer" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.196337 4842 generic.go:334] "Generic (PLEG): container finished" podID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerID="582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa" exitCode=0 Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.196453 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerDied","Data":"582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa"} Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.417669 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.601786 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.601933 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.602889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.602926 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.602955 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.603324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs" (OuterVolumeSpecName: "logs") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.603701 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.609365 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg" (OuterVolumeSpecName: "kube-api-access-8xwxg") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "kube-api-access-8xwxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.633467 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data" (OuterVolumeSpecName: "config-data") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.638704 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.676113 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705522 4842 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705556 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705565 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705575 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.207793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerDied","Data":"a1edffd6229fcfd445e770ea5551a81134a2ceed05cbf411c15f38de72a6bfa9"} Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.207841 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.207860 4842 scope.go:117] "RemoveContainer" containerID="582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.209666 4842 generic.go:334] "Generic (PLEG): container finished" podID="46ba09a5-eecd-46b6-9182-96444c6de570" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" exitCode=0 Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.210878 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerDied","Data":"fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c"} Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.240970 4842 scope.go:117] "RemoveContainer" containerID="e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.271054 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.282447 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.293727 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: E0202 07:08:35.294321 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294335 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" Feb 02 07:08:35 crc kubenswrapper[4842]: E0202 07:08:35.294355 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294362 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294605 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294628 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.295803 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.298066 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.298262 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.303773 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421558 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421913 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421949 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421979 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.445937 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" path="/var/lib/kubelet/pods/ec1cba88-8c9f-48bb-91fc-fc7675bba29a/volumes" Feb 02 07:08:35 crc 
kubenswrapper[4842]: I0202 07:08:35.478558 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.523820 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.523900 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.523942 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.524009 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.524092 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.524405 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.529557 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.529606 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.532849 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.544207 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz5c2\" 
(UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.609758 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.624797 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"46ba09a5-eecd-46b6-9182-96444c6de570\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.624905 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"46ba09a5-eecd-46b6-9182-96444c6de570\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.624977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"46ba09a5-eecd-46b6-9182-96444c6de570\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.628603 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28" (OuterVolumeSpecName: "kube-api-access-jtj28") pod "46ba09a5-eecd-46b6-9182-96444c6de570" (UID: "46ba09a5-eecd-46b6-9182-96444c6de570"). InnerVolumeSpecName "kube-api-access-jtj28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.681507 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46ba09a5-eecd-46b6-9182-96444c6de570" (UID: "46ba09a5-eecd-46b6-9182-96444c6de570"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.681525 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data" (OuterVolumeSpecName: "config-data") pod "46ba09a5-eecd-46b6-9182-96444c6de570" (UID: "46ba09a5-eecd-46b6-9182-96444c6de570"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.728680 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.729573 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.729607 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.081549 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.224358 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerDied","Data":"968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975"} Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.224440 4842 scope.go:117] "RemoveContainer" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.224474 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.226283 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerStarted","Data":"97d85497136bca54efa2ce8c8d3033b9016ab0e739dcabcdf04a8ad306a7c1b7"} Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.289619 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.307910 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.367781 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: E0202 07:08:36.368531 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.368556 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.368743 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.369525 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.371998 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.389772 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.542807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.542990 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.543188 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.644728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.644910 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.644951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.649909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.649988 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.671539 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69gq\" (UniqueName: 
\"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.720573 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.239699 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.242974 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerStarted","Data":"c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab"} Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.243039 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerStarted","Data":"415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd"} Feb 02 07:08:37 crc kubenswrapper[4842]: W0202 07:08:37.243917 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f94c60e_a4fc_4b7d_96cd_367d46a731c4.slice/crio-95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f WatchSource:0}: Error finding container 95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f: Status 404 returned error can't find the container with id 95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.276107 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.276081413 podStartE2EDuration="2.276081413s" podCreationTimestamp="2026-02-02 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:37.27235565 +0000 UTC m=+1342.649623572" watchObservedRunningTime="2026-02-02 07:08:37.276081413 +0000 UTC m=+1342.653349365" Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.459832 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" path="/var/lib/kubelet/pods/46ba09a5-eecd-46b6-9182-96444c6de570/volumes" Feb 02 07:08:38 crc kubenswrapper[4842]: I0202 07:08:38.259876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerStarted","Data":"aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc"} Feb 02 07:08:38 crc kubenswrapper[4842]: I0202 07:08:38.259948 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerStarted","Data":"95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f"} Feb 02 07:08:38 crc kubenswrapper[4842]: I0202 07:08:38.292153 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.292117472 podStartE2EDuration="2.292117472s" podCreationTimestamp="2026-02-02 07:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:38.281462728 +0000 UTC m=+1343.658730670" 
watchObservedRunningTime="2026-02-02 07:08:38.292117472 +0000 UTC m=+1343.669385394" Feb 02 07:08:40 crc kubenswrapper[4842]: I0202 07:08:40.609974 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 07:08:40 crc kubenswrapper[4842]: I0202 07:08:40.610533 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 07:08:41 crc kubenswrapper[4842]: I0202 07:08:41.721704 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 07:08:41 crc kubenswrapper[4842]: I0202 07:08:41.875824 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:41 crc kubenswrapper[4842]: I0202 07:08:41.875884 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:42 crc kubenswrapper[4842]: I0202 07:08:42.889344 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:42 crc kubenswrapper[4842]: I0202 07:08:42.889453 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:45 crc kubenswrapper[4842]: I0202 07:08:45.610620 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:08:45 crc kubenswrapper[4842]: I0202 07:08:45.611039 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.622444 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.622460 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.720879 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.765873 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 07:08:47 crc kubenswrapper[4842]: I0202 07:08:47.422096 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 07:08:49 crc kubenswrapper[4842]: I0202 07:08:49.592136 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.884970 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.886135 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.887869 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.896018 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:52 crc kubenswrapper[4842]: I0202 07:08:52.426726 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:52 crc kubenswrapper[4842]: I0202 07:08:52.442153 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:55 crc kubenswrapper[4842]: I0202 07:08:55.617020 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:55 crc kubenswrapper[4842]: I0202 07:08:55.620978 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:55 crc kubenswrapper[4842]: I0202 07:08:55.623769 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:08:56 crc kubenswrapper[4842]: I0202 07:08:56.497589 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.796581 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.806098 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.819399 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.828485 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.829747 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.829895 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.933376 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.934164 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.934040 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.954485 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.954729 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" containerID="cri-o://092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec" gracePeriod=30 Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.955093 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" containerID="cri-o://bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf" gracePeriod=30 Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.970322 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.971830 
4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.989492 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.989763 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log" containerID="cri-o://bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" gracePeriod=30 Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.989884 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api" containerID="cri-o://35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" gracePeriod=30 Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.016881 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.018254 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.021929 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038621 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038831 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038854 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038893 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038931 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038980 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038997 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.039028 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.039049 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.044459 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.060481 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.096255 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.098989 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.106328 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.116682 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143353 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143420 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143449 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143466 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143498 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143522 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143575 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143603 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143625 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143641 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143664 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143700 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.153325 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.153960 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.164966 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.165074 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.168179 4842 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.178984 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.180377 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.186482 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.193077 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.225651 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.248366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.248405 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.249435 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.256967 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.271291 4842 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.291270 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.292442 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.310852 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.311094 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.376438 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.386575 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.386696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.388815 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.488325 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.491361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.491439 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.494619 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.547780 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.562673 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.579345 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.611323 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.636916 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.653235 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.653464 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient" containerID="cri-o://7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316" gracePeriod=2 Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.668645 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.677113 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.710403 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.855276 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:14 crc kubenswrapper[4842]: E0202 07:09:14.855941 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.855953 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.856120 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.857071 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.876070 4842 generic.go:334] "Generic (PLEG): container finished" podID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" exitCode=143 Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.876113 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerDied","Data":"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070"} Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.879567 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.889632 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925528 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925598 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925670 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925688 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925726 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.927961 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.929129 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.934557 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.975471 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.976769 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.980314 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.014297 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027024 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027094 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027139 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027196 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027229 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod 
\"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027251 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027280 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027308 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027324 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.029634 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.029750 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.029828 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:15.529808355 +0000 UTC m=+1380.907076267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.036397 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.042768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.044419 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.047765 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.051763 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.054892 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.055790 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.093631 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.105573 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.105670 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:15.605649364 +0000 UTC m=+1380.982917276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.113075 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.129788 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.129934 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.130884 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.131487 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.131597 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.131638 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:15.631626138 +0000 UTC m=+1381.008894050 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.131841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.139069 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.143150 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.143204 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.151725 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.160391 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.161555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.161621 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.174366 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.186621 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.232545 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.233822 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.239035 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246365 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246707 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246741 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246879 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246911 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.247841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.247896 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.248783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"glance-2348-account-create-update-j8g5r\" (UID: 
\"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.261417 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.336020 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.346351 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.346795 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351020 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351056 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351169 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351302 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.352003 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.363503 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: 
\"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.375368 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.379558 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.395971 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.425150 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.425821 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" containerID="cri-o://12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573" gracePeriod=300 Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.464553 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.464867 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.465727 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.464872 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.521637 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.543893 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.571137 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52bba199-2794-4828-9a54-e1aac49fb223" path="/var/lib/kubelet/pods/52bba199-2794-4828-9a54-e1aac49fb223/volumes" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.572010 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" path="/var/lib/kubelet/pods/668f221e-e491-4ec6-9f40-82dd1afc3ac8/volumes" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.573357 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" path="/var/lib/kubelet/pods/a9d15d01-9c12-4b4f-9cec-037a1d21fab1/volumes" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.573937 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" path="/var/lib/kubelet/pods/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb/volumes" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.575197 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" path="/var/lib/kubelet/pods/ef83800c-79dc-4cfa-9f7c-194a44995d12/volumes" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.577767 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.580766 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.580850 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.582764 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.583275 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.583824 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.583879 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.583863617 +0000 UTC m=+1381.961131529 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.597116 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.602176 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" containerID="cri-o://c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f" gracePeriod=300
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.608077 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.645562 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.668273 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.683727 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.684698 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.684779 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.684854 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.685089 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
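The mount failures in this stretch all have one shape: the volume plugin cannot find the API object backing the volume (secret "barbican-config-data", configmap "rabbitmq-config-data", and, for the projected token, serviceaccount "barbican-barbican"), so nestedpendingoperations schedules a retry and the delay doubles on each attempt (500ms, 1s and 2s all appear in this log). These are typically ordering races while the operator is still creating the objects. A minimal diagnostic sketch, assuming oc access to this cluster; the object names are taken from the messages above:

# Do the objects the kubelet is waiting on exist yet? While the operator is
# still reconciling, NotFound here matches the MountVolume.SetUp errors above.
oc -n openstack get configmap rabbitmq-config-data
oc -n openstack get secret barbican-config-data
oc -n openstack get serviceaccount barbican-barbican

If the objects appear before the retry backoff caps out, the mounts succeed on a later pass and the pods start without intervention.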
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.685160 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.685140726 +0000 UTC m=+1382.062408638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.694193 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.694290 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.694269959 +0000 UTC m=+1382.071537871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.725457 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.743763 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.757980 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.773331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.785701 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.787670 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.788003 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.788856 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.800318 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.813830 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.813976 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.829002 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.837361 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-phj68"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.859802 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-phj68"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.866862 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.876090 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7qxb9"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.885487 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.885798 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" containerID="cri-o://6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" gracePeriod=30
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.886193 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter" containerID="cri-o://e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0" gracePeriod=30
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.895737 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7qxb9"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.923396 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2ddsf"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.926015 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.928100 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2ddsf"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932410 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bff6dd37-52b7-41b4-bc15-4f6436cdabc7/ovsdbserver-nb/0.log"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932450 4842 generic.go:334] "Generic (PLEG): container finished" podID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerID="12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573" exitCode=2
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932466 4842 generic.go:334] "Generic (PLEG): container finished" podID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerID="c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f" exitCode=143
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932529 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerDied","Data":"12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932554 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerDied","Data":"c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.936365 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rpkx6"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.944497 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rpkx6"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.961978 4842 generic.go:334] "Generic (PLEG): container finished" podID="115a51a9-6125-46e1-a960-a66cb9957d38" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec" exitCode=0
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.962045 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerDied","Data":"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.969236 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerStarted","Data":"d69c45eb45e674be84418f12982b88cbb7cb13f89d733e29e26157326878116c"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.971339 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.971524 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-4glck" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" containerID="cri-o://a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd" gracePeriod=30
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.978030 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sgwrm"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.987716 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:15.994875 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:15.995431 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" containerID="cri-o://c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266" gracePeriod=300
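The exit codes just reported follow the shell's 128+N convention for death by signal N: the openstack-network-exporter container exited 2 (an ordinary error exit), while ovsdbserver-nb exited 143 = 128 + 15 (SIGTERM, the normal stop path); the 137s around the PreStop hooks below are 128 + 9 (SIGKILL, sent once the grace period or hook budget runs out). A two-line bash illustration of the convention (generic, not from this system):

# A process killed by signal N exits with status 128+N.
sh -c 'kill -TERM $$'; echo $?   # 143, as reported for ovsdbserver-nb above
sh -c 'kill -KILL $$'; echo $?   # 137, as reported for the PreStop hooks below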
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.002838 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.030502 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerStarted","Data":"c436c98ac030592508317571235d4b580f2fca45d60bf44a940ecdb59f089266"}
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.040795 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.041139 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="dnsmasq-dns" containerID="cri-o://b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3" gracePeriod=10
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.060228 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-sjstk"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.087087 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-sjstk"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.144388 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" containerID="cri-o://6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" gracePeriod=300
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.190880 4842 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-sgwrm" message=<
Feb 02 07:09:16 crc kubenswrapper[4842]: Exiting ovn-controller (1) [ OK ]
Feb 02 07:09:16 crc kubenswrapper[4842]: >
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.190938 4842 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-sgwrm" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" containerID="cri-o://42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7"
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.190972 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-sgwrm" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" containerID="cri-o://42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.198010 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.200477 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata" containerID="cri-o://c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.200610 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0"
containerName="nova-metadata-log" containerID="cri-o://415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.238730 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.263700 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.285202 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.285546 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log" containerID="cri-o://1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.290521 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api" containerID="cri-o://bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.333577 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.333896 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.833875717 +0000 UTC m=+1382.211143629 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.368272 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.403690 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.419597 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.449365 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.450298 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5b5c67fdbd-zsx96" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log" containerID="cri-o://6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.451264 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5b5c67fdbd-zsx96" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api" containerID="cri-o://c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.460520 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.460753 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log" containerID="cri-o://baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.460877 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd" containerID="cri-o://50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.477736 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.513236 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.527757 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.569751 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.581164 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.581403 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log" containerID="cri-o://c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.581814 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-httpd" containerID="cri-o://72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.594826 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.595045 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6684555597-gjtgz" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api" containerID="cri-o://679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.595458 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6684555597-gjtgz" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd" containerID="cri-o://69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.603331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.616417 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.641501 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.641718 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.641766 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:18.641749666 +0000 UTC m=+1384.019017568 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.669496 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670398 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server" containerID="cri-o://496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670487 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server" containerID="cri-o://78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670463 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server" containerID="cri-o://5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670672 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer" containerID="cri-o://c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670642 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor" containerID="cri-o://98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670651 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator" containerID="cri-o://84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670733 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater" containerID="cri-o://11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670750 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron" containerID="cri-o://a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670783 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync" containerID="cri-o://419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe" 
gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670796 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-auditor" containerID="cri-o://3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670812 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator" containerID="cri-o://a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670851 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper" containerID="cri-o://1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670890 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor" containerID="cri-o://81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670629 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater" containerID="cri-o://94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670892 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator" containerID="cri-o://0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.743503 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.743788 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.743839 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:18.743822425 +0000 UTC m=+1384.121090337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.747207 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.747278 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:18.747263313 +0000 UTC m=+1384.124531225 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.755137 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.764029 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 02 07:09:16 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash
Feb 02 07:09:16 crc kubenswrapper[4842]:
Feb 02 07:09:16 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 02 07:09:16 crc kubenswrapper[4842]:
Feb 02 07:09:16 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 02 07:09:16 crc kubenswrapper[4842]:
Feb 02 07:09:16 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 02 07:09:16 crc kubenswrapper[4842]:
Feb 02 07:09:16 crc kubenswrapper[4842]: if [ -n "nova_api" ]; then
Feb 02 07:09:16 crc kubenswrapper[4842]: GRANT_DATABASE="nova_api"
Feb 02 07:09:16 crc kubenswrapper[4842]: else
Feb 02 07:09:16 crc kubenswrapper[4842]: GRANT_DATABASE="*"
Feb 02 07:09:16 crc kubenswrapper[4842]: fi
Feb 02 07:09:16 crc kubenswrapper[4842]:
Feb 02 07:09:16 crc kubenswrapper[4842]: # going for maximum compatibility here:
Feb 02 07:09:16 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 02 07:09:16 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 02 07:09:16 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 02 07:09:16 crc kubenswrapper[4842]: # support updates
Feb 02 07:09:16 crc kubenswrapper[4842]:
Feb 02 07:09:16 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError"
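The logger elides the heredoc body after $MYSQL_CMD <, but the script's own comments spell out the intended pattern: CREATE USER first (MySQL 8 forbids implicit creation via GRANT), then ALTER USER for password and TLS so a rerun updates in place, then GRANT on the target database. Note also that MYSQL_CMD renders here as mysql -h -u root with an empty host argument; presumably the sourced mysql_root_auth.sh is meant to supply the host before the command runs. A hypothetical rendering of the described pattern, using the script's GRANT_DATABASE and DatabasePassword names; the account user and host variables are placeholders, and the real statements are not in this log:

# Hypothetical sketch only -- the actual heredoc body is elided from the log.
mysql -h "$MYSQL_REMOTE_HOST" -u root -P 3306 <<SQL
CREATE USER IF NOT EXISTS '$ACCOUNT_USER'@'%';
ALTER USER '$ACCOUNT_USER'@'%' IDENTIFIED BY '$DatabasePassword';
GRANT ALL PRIVILEGES ON \`$GRANT_DATABASE\`.* TO '$ACCOUNT_USER'@'%';
SQL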
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.768864 4842 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Feb 02 07:09:16 crc kubenswrapper[4842]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Feb 02 07:09:16 crc kubenswrapper[4842]: + source /usr/local/bin/container-scripts/functions
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNBridge=br-int
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNRemote=tcp:localhost:6642
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNEncapType=geneve
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNAvailabilityZones=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ EnableChassisAsGateway=true
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ PhysicalNetworks=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNHostName=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ DB_FILE=/etc/openvswitch/conf.db
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ ovs_dir=/var/lib/openvswitch
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Feb 02 07:09:16 crc kubenswrapper[4842]: + sleep 0.5
Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Feb 02 07:09:16 crc kubenswrapper[4842]: + cleanup_ovsdb_server_semaphore
Feb 02 07:09:16 crc kubenswrapper[4842]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Feb 02 07:09:16 crc kubenswrapper[4842]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Feb 02 07:09:16 crc kubenswrapper[4842]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-vctt8" message=<
Feb 02 07:09:16 crc kubenswrapper[4842]: Exiting ovsdb-server (5) [ OK ]
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Feb 02 07:09:16 crc kubenswrapper[4842]: + source /usr/local/bin/container-scripts/functions
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNBridge=br-int
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNRemote=tcp:localhost:6642
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNEncapType=geneve
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNAvailabilityZones=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ EnableChassisAsGateway=true
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ PhysicalNetworks=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNHostName=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ DB_FILE=/etc/openvswitch/conf.db
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ ovs_dir=/var/lib/openvswitch
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Feb 02 07:09:16 crc kubenswrapper[4842]: + sleep 0.5
Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Feb 02 07:09:16 crc kubenswrapper[4842]: + cleanup_ovsdb_server_semaphore
Feb 02 07:09:16 crc kubenswrapper[4842]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Feb 02 07:09:16 crc kubenswrapper[4842]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Feb 02 07:09:16 crc kubenswrapper[4842]: >
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.768905 4842 kuberuntime_container.go:691] "PreStop hook failed" err=<
Feb 02 07:09:16 crc kubenswrapper[4842]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Feb 02 07:09:16 crc kubenswrapper[4842]: + source /usr/local/bin/container-scripts/functions
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNBridge=br-int
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNRemote=tcp:localhost:6642
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNEncapType=geneve
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNAvailabilityZones=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ EnableChassisAsGateway=true
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ PhysicalNetworks=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNHostName=
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ DB_FILE=/etc/openvswitch/conf.db
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ ovs_dir=/var/lib/openvswitch
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Feb 02 07:09:16 crc kubenswrapper[4842]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Feb 02 07:09:16 crc kubenswrapper[4842]: + sleep 0.5
Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Feb 02 07:09:16 crc kubenswrapper[4842]: + cleanup_ovsdb_server_semaphore
Feb 02 07:09:16 crc kubenswrapper[4842]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Feb 02 07:09:16 crc kubenswrapper[4842]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Feb 02 07:09:16 crc kubenswrapper[4842]: > pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" containerID="cri-o://a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c"
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.768936 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" containerID="cri-o://a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.769198 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-89ff-account-create-update-fbkfk" podUID="8dad4bc1-b1ae-436c-925e-986d33b77e51"
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.779407 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vsjtz"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.791711 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" containerID="cri-o://3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.805720 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vsjtz"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.820418 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"]
Feb 02 07:09:16 crc kubenswrapper[4842]: W0202 07:09:16.830929 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod748756c2_ee60_42ce_835e_bfaa7007d7ac.slice/crio-09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e WatchSource:0}: Error finding container 09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e: Status 404 returned error can't find the container with id 09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.830986 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.839133 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dg9pd"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.840780 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" containerID="cri-o://6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" gracePeriod=30
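Both failing hooks trace the same shutdown protocol: stop-ovsdb-server.sh polls for a semaphore file (is_safe_to_stop_ovsdb_server) that is expected to be created elsewhere once it is safe to proceed, then removes it and stops ovsdb-server while leaving ovs-vswitchd running. The exec'd hook still returned 137 (128 + SIGKILL), i.e. it was killed rather than exiting cleanly, even though its last traced step was the ovs-ctl stop. A minimal sketch of the wait-for-semaphore pattern, with paths as logged; the loop shape is inferred from the repeated test/sleep steps in the trace:

# Block until the peer signals it is safe, then consume the semaphore and stop.
SEM=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
until [ -f "$SEM" ]; do sleep 0.5; done
rm -f "$SEM"
/usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd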
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.845280 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.845664 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dg9pd"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850322 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") "
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850392 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") "
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850462 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") "
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850521 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") "
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") "
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850636 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") "
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.852788 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.852865 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.852916 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:17.852900863 +0000 UTC m=+1383.230168775 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.852970 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.853201 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.860156 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.860387 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.864503 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts" (OuterVolumeSpecName: "scripts") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.866275 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk" (OuterVolumeSpecName: "kube-api-access-wmstk") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "kube-api-access-wmstk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.867830 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.868625 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.876870 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.896070 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.928855 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.946380 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.956396 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.956421 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.956433 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.959115 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.971796 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.979604 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.981866 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bff6dd37-52b7-41b4-bc15-4f6436cdabc7/ovsdbserver-nb/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.981931 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.987041 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.021642 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.026655 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:17 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: if [ -n "nova_cell1" ]; then Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="nova_cell1" Feb 02 07:09:17 crc kubenswrapper[4842]: else Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:17 crc kubenswrapper[4842]: fi Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:17 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:17 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:17 crc kubenswrapper[4842]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:17 crc kubenswrapper[4842]: # support updates Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.028827 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" podUID="88d00cbf-6e28-4be5-abc2-6c77e76de81e" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.052777 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.055762 4842 generic.go:334] "Generic (PLEG): container finished" podID="34f55116-a518-4f21-8816-6f8232a6f68d" containerID="c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.055815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerDied","Data":"c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057190 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057769 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057912 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057953 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.058000 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.058020 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.060389 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config" (OuterVolumeSpecName: "config") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.067364 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.067413 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.067859 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts" (OuterVolumeSpecName: "scripts") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.075167 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.076855 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.078644 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.087365 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.087624 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-57cc9f4749-jxzrq" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log" containerID="cri-o://2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.088030 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-57cc9f4749-jxzrq" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker" containerID="cri-o://36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.089745 4842 generic.go:334] "Generic (PLEG): container finished" podID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerID="baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.089827 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerDied","Data":"baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.099546 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n" (OuterVolumeSpecName: "kube-api-access-pxl6n") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "kube-api-access-pxl6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.103367 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.111133 4842 generic.go:334] "Generic (PLEG): container finished" podID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerID="1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.111209 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerDied","Data":"1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.118456 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-fbkfk" event={"ID":"8dad4bc1-b1ae-436c-925e-986d33b77e51","Type":"ContainerStarted","Data":"19b5b9e6138f019e100c7874a7e9ab2b0be50a7d46a7fd240461e516fb3462c0"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.135381 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.135597 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log" containerID="cri-o://d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.135726 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api" containerID="cri-o://83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.142811 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.151028 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data kube-api-access-h5vs6], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/barbican-api-654fdfd6b6-nrxvh" podUID="72b63114-a275-4e32-9ad4-9f59e22151b3" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159678 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159696 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159706 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159716 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") on node \"crc\" DevicePath \"\"" 
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159734 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.165093 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq" containerID="cri-o://384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" gracePeriod=604800 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.175498 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerStarted","Data":"09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.176886 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.191503 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.191769 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log" containerID="cri-o://5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.192147 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener" containerID="cri-o://aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.206039 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.219055 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:17 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: if [ -n "nova_api" ]; then Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="nova_api" Feb 02 07:09:17 crc 
kubenswrapper[4842]: else Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:17 crc kubenswrapper[4842]: fi Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:17 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:17 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:17 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:17 crc kubenswrapper[4842]: # support updates Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.221644 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-89ff-account-create-update-fbkfk" podUID="8dad4bc1-b1ae-436c-925e-986d33b77e51" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.253886 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.254060 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler" containerID="cri-o://aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.263708 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.281795 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.300223 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302646 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302674 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302681 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302688 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302703 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" 
containerID="94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302710 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302716 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302722 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302729 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302737 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302747 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302830 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302863 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302874 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302888 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302902 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302934 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302946 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302955 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302962 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302974 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302985 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.307771 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.307958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor" containerID="cri-o://b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.310243 4842 generic.go:334] "Generic (PLEG): container finished" podID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerID="e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0" exitCode=2 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.310285 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerDied","Data":"e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.311745 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.312880 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" event={"ID":"88d00cbf-6e28-4be5-abc2-6c77e76de81e","Type":"ContainerStarted","Data":"595b44b024cc413350c4c52a2edd391699f6565dcef71575de95c9a8d45985fb"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.316504 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerStarted","Data":"04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.317267 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.326275 4842 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.326502 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.331848 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.336315 4842 generic.go:334] "Generic (PLEG): container finished" podID="115a51a9-6125-46e1-a960-a66cb9957d38" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.336449 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.338278 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342809 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerDied","Data":"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342842 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerDied","Data":"d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342853 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342880 4842 scope.go:117] "RemoveContainer" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.347423 4842 generic.go:334] "Generic (PLEG): container finished" podID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerID="6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.347478 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerDied","Data":"6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.349354 4842 generic.go:334] "Generic (PLEG): container finished" podID="590d1088-e964-43a6-b879-01c8b83d4147" containerID="7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316" exitCode=137 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.350883 4842 generic.go:334] "Generic (PLEG): container finished" podID="82827ec9-ac05-41ab-988c-99083ccdb949" containerID="b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.350917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerDied","Data":"b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3"} Feb 02 07:09:17 
crc kubenswrapper[4842]: I0202 07:09:17.352808 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.359466 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.362039 4842 generic.go:334] "Generic (PLEG): container finished" podID="953bf671-ca79-4208-9bab-672dc079dd82" containerID="69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.362104 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerDied","Data":"69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.369662 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.369690 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.375463 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" cmd=["/usr/bin/pidof","ovsdb-server"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.375879 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" cmd=["/usr/bin/pidof","ovsdb-server"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.382022 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" cmd=["/usr/bin/pidof","ovsdb-server"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.382097 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.382499 4842 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384774 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a31583c1-5fde-4763-a889-7257255fa217/ovsdbserver-sb/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384807 4842 generic.go:334] "Generic (PLEG): container finished" podID="a31583c1-5fde-4763-a889-7257255fa217" containerID="c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266" exitCode=2 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384822 4842 generic.go:334] "Generic (PLEG): container finished" podID="a31583c1-5fde-4763-a889-7257255fa217" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerDied","Data":"c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384879 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerDied","Data":"6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.389522 4842 generic.go:334] "Generic (PLEG): container finished" podID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerID="415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.389568 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerDied","Data":"415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.390637 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data" (OuterVolumeSpecName: "config-data") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.410715 4842 generic.go:334] "Generic (PLEG): container finished" podID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerID="42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.410790 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerDied","Data":"42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.417258 4842 generic.go:334] "Generic (PLEG): container finished" podID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.417311 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.421352 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.427372 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bff6dd37-52b7-41b4-bc15-4f6436cdabc7/ovsdbserver-nb/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.427492 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerDied","Data":"0b86eb955efed6c0beae4754f7a259bd87ec4d6377bfa3532f73d18514ea5e3d"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.427577 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.435252 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4glck_a768c72b-df6d-463e-b085-996d7b910985/openstack-network-exporter/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.435287 4842 generic.go:334] "Generic (PLEG): container finished" podID="a768c72b-df6d-463e-b085-996d7b910985" containerID="a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd" exitCode=2 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.454076 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" path="/var/lib/kubelet/pods/15fb5e79-8dd5-46ae-b8dd-6944cc810350/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.457668 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" path="/var/lib/kubelet/pods/27c72b5c-16bb-4404-8c00-6b37ed7d9b47/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.458180 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" path="/var/lib/kubelet/pods/2d8715fd-8755-4bd6-82a7-bf49d61e1779/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.458684 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" path="/var/lib/kubelet/pods/31bf41ed-98c7-44ed-abba-93b74a546e71/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.460284 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" path="/var/lib/kubelet/pods/38cfcc24-6854-414a-9d6c-4769e1366eb1/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.460777 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" path="/var/lib/kubelet/pods/3f4b2578-8a31-4097-afd3-04bae6621094/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.461279 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" path="/var/lib/kubelet/pods/4b414999-f3d0-4101-abe7-ed8c7747ce5f/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.462831 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6418a243-5699-42a3-8fab-d65c530c9951" path="/var/lib/kubelet/pods/6418a243-5699-42a3-8fab-d65c530c9951/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.463671 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" path="/var/lib/kubelet/pods/80249ec8-3d5a-4020-bed2-83b8ecd32ab9/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.464341 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" path="/var/lib/kubelet/pods/939ed5f9-679d-44c4-8282-d1404d98b420/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.464819 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" path="/var/lib/kubelet/pods/9c852e5a-26fe-4905-8483-4619c280f9c0/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.466559 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" path="/var/lib/kubelet/pods/a1048c2f-1504-465a-b0fb-da368d25f0ff/volumes" Feb 
02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.467512 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" path="/var/lib/kubelet/pods/b8cd42ce-4a62-486b-9571-58d789ca2d38/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.468152 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49955b5-5145-4939-91e5-280569e18a33" path="/var/lib/kubelet/pods/c49955b5-5145-4939-91e5-280569e18a33/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.469496 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" path="/var/lib/kubelet/pods/c51cea52-ce54-4855-9d4c-97817c4cc6b0/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.470802 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" path="/var/lib/kubelet/pods/cf6c9856-8e0e-462e-a2bb-b21847078b54/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.471404 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" path="/var/lib/kubelet/pods/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.471911 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" path="/var/lib/kubelet/pods/d82484f3-c883-4c12-8ca1-6de8ead67139/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.472388 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.472422 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.472865 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" path="/var/lib/kubelet/pods/d9f1c72e-953b-45ba-ba69-c7574f82e8ad/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.475013 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.475958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq" containerID="cri-o://3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5" gracePeriod=604800 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.476373 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" path="/var/lib/kubelet/pods/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.478131 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" path="/var/lib/kubelet/pods/fb013bc6-805e-43d5-95f8-98597c33fa9e/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.479719 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff8a308-89ab-409f-9053-6a363794df83" path="/var/lib/kubelet/pods/fff8a308-89ab-409f-9053-6a363794df83/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.480731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerDied","Data":"a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd"} Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.512321 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.513579 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.514539 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.514591 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.573760 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.582701 4842 scope.go:117] "RemoveContainer" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.641909 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.642122 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.644596 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd" containerID="cri-o://1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec" gracePeriod=30
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.644872 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server" containerID="cri-o://49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750" gracePeriod=30
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.662604 4842 scope.go:117] "RemoveContainer" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.664998 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf\": container with ID starting with bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf not found: ID does not exist" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.665033 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"} err="failed to get container status \"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf\": rpc error: code = NotFound desc = could not find container \"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf\": container with ID starting with bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf not found: ID does not exist"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.665056 4842 scope.go:117] "RemoveContainer" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.667837 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec\": container with ID starting with 092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec not found: ID does not exist" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.667879 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"} err="failed to get container status \"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec\": rpc error: code = NotFound desc = could not find container \"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec\": container with ID starting with 092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec not found: ID does not exist"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.667902 4842 scope.go:117] "RemoveContainer" containerID="12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573"
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.689187 4842 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/ovsdbserver-nb-0_openstack_openstack-network-exporter-12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573.log: no such file or directory" path="/var/log/containers/ovsdbserver-nb-0_openstack_openstack-network-exporter-12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573.log"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.704113 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.753963 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.758841 4842 scope.go:117] "RemoveContainer" containerID="c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.759870 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.770584 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4glck_a768c72b-df6d-463e-b085-996d7b910985/openstack-network-exporter/0.log"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.770665 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.772404 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796035 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796109 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796161 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796199 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797127 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797144 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797147 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797200 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797275 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797335 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797398 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797441 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a31583c1-5fde-4763-a889-7257255fa217/ovsdbserver-sb/0.log"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797507 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797517 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797533 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797560 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797582 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797635 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797669 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797714 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797744 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797768 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797794 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797815 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.798970 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799007 4842 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.798999 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts" (OuterVolumeSpecName: "scripts") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799023 4842 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799072 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799100 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run" (OuterVolumeSpecName: "var-run") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.800422 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config" (OuterVolumeSpecName: "config") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.817858 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.837409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj" (OuterVolumeSpecName: "kube-api-access-h79wj") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "kube-api-access-h79wj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.841195 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx" (OuterVolumeSpecName: "kube-api-access-hw7kx") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "kube-api-access-hw7kx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.842042 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8" (OuterVolumeSpecName: "kube-api-access-vg6j8") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "kube-api-access-vg6j8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.875688 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.880396 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.900149 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.901255 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902116 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902189 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902305 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902426 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902499 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902628 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903132 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903193 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903403 4842 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903454 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903501 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903546 4842 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903654 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903716 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903763 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.901188 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.901959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts" (OuterVolumeSpecName: "scripts") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.903851 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.904029 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:19.904013559 +0000 UTC m=+1385.281281471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.904856 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config" (OuterVolumeSpecName: "config") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.911307 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.930081 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26" (OuterVolumeSpecName: "kube-api-access-pzd26") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "kube-api-access-pzd26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.938561 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.947826 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.977732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.978183 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.986986 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.992075 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config" (OuterVolumeSpecName: "config") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.000373 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005481 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005508 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005520 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005530 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005539 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005548 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005557 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005565 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005573 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005581 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005590 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.006026 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.006194 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.007298 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.007379 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.026511 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.032480 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.038506 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.097073 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148740 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148858 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148913 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148962 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.149008 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.162874 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.173367 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.179387 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash
Feb 02 07:09:18 crc kubenswrapper[4842]: 
Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 02 07:09:18 crc kubenswrapper[4842]: 
Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 02 07:09:18 crc kubenswrapper[4842]: 
Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 02 07:09:18 crc kubenswrapper[4842]: 
Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "nova_cell0" ]; then
Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="nova_cell0"
Feb 02 07:09:18 crc kubenswrapper[4842]: else
Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*"
Feb 02 07:09:18 crc kubenswrapper[4842]: fi
Feb 02 07:09:18 crc kubenswrapper[4842]: 
Feb 02 07:09:18 crc kubenswrapper[4842]: # going for maximum compatibility here:
Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates
Feb 02 07:09:18 crc kubenswrapper[4842]: 
Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError"
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.181288 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" podUID="5130c998-8bfd-413c-887e-2100da96f6ce"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.441342 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469417 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a31583c1-5fde-4763-a889-7257255fa217/ovsdbserver-sb/0.log"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469489 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerDied","Data":"1455920f56b035102336b6030ca95115000c538e6e505a3b940faf00be0a7147"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469527 4842 scope.go:117] "RemoveContainer" containerID="c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469639 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.477328 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" event={"ID":"5130c998-8bfd-413c-887e-2100da96f6ce","Type":"ContainerStarted","Data":"edae9a46c8962c16de1f47c9594d864df221b1f93bbc0bdc1a42fba426cadc08"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.519550 4842 scope.go:117] "RemoveContainer" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.519670 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.535165 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.557484 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559315 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559461 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe" exitCode=0
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559484 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391" exitCode=0
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559493 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766" exitCode=0
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559557 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559582 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559593 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.568478 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.166:8776/healthcheck\": read tcp 10.217.0.2:36538->10.217.0.166:8776: read: connection reset by peer"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.571755 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.572065 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.572288 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.572530 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.571763 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.578075 4842 generic.go:334] "Generic (PLEG): container finished" podID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerID="2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9" exitCode=143
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.578360 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerDied","Data":"2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.595130 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.595966 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4glck_a768c72b-df6d-463e-b085-996d7b910985/openstack-network-exporter/0.log"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.596827 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.597674 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.597725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerDied","Data":"3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.603437 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.604126 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6" (OuterVolumeSpecName: "kube-api-access-wz5x6") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "kube-api-access-wz5x6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.612224 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.623628 4842 scope.go:117] "RemoveContainer" containerID="a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.630885 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerStarted","Data":"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.630942 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerStarted","Data":"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.631083 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log" containerID="cri-o://c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" gracePeriod=30
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.631692 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener" containerID="cri-o://b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" gracePeriod=30
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.643815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerDied","Data":"e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.643918 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm"
Feb 02 07:09:18 crc kubenswrapper[4842]: W0202 07:09:18.650304 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode91519e6_bf55_4c08_8274_1d8a59f1ff52.slice/crio-16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e WatchSource:0}: Error finding container 16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e: Status 404 returned error can't find the container with id 16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652096 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652358 4842 generic.go:334] "Generic (PLEG): container finished" podID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerID="19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1" exitCode=0
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652490 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d","Type":"ContainerDied","Data":"19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652568 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.661936 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.662060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerDied","Data":"3b795fd687296b78b29dffde7f9f5a14bcbd688f6a97aac6389de0b8b43b6094"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.662562 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.671342 4842 generic.go:334] "Generic (PLEG): container finished" podID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" exitCode=0
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.671400 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerDied","Data":"6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.671491 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.674761 4842 generic.go:334] "Generic (PLEG): container finished" podID="b912e45d-72e7-4250-9757-add1efcfb054" containerID="9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507" exitCode=1
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.674825 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerDied","Data":"9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.675048 4842 scope.go:117] "RemoveContainer" containerID="9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.680868 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" podStartSLOduration=5.680852285 podStartE2EDuration="5.680852285s" podCreationTimestamp="2026-02-02 07:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:09:18.65756958 +0000 UTC m=+1384.034837492" watchObservedRunningTime="2026-02-02 07:09:18.680852285 +0000 UTC m=+1384.058120197"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.683137 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687788 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687826 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687869 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687888 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687955 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687989 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688151 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688304 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688335 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688363 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688383 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688441 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688471 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688517 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") "
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688793 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688897 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688908 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688919 4842 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688927 4842 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.688999 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.689055 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:22.689025684 +0000 UTC m=+1388.066293596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.700755 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.701328 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88d00cbf-6e28-4be5-abc2-6c77e76de81e" (UID: "88d00cbf-6e28-4be5-abc2-6c77e76de81e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.702484 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.702630 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.704301 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.704338 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.704580 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.707395 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8" (OuterVolumeSpecName: "kube-api-access-nm2d8") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "kube-api-access-nm2d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709175 4842 generic.go:334] "Generic (PLEG): container finished" podID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerID="49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750" exitCode=0
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709268 4842 generic.go:334] "Generic (PLEG): container finished" podID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerID="1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec" exitCode=0
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerDied","Data":"49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709426 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerDied","Data":"1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.719959 4842 generic.go:334] "Generic (PLEG): container finished" podID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerID="d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f" exitCode=143
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.720019 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerDied","Data":"d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f"}
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.725538 4842 operation_generator.go:803]
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6" (OuterVolumeSpecName: "kube-api-access-8b6r6") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "kube-api-access-8b6r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.728506 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.736968 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "cinder" ]; then Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="cinder" Feb 02 07:09:18 crc kubenswrapper[4842]: else Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:18 crc kubenswrapper[4842]: fi Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737151 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737408 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent" containerID="cri-o://454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737508 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core" containerID="cri-o://4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737539 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent" containerID="cri-o://b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737533 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd" containerID="cri-o://bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.743382 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-716d-account-create-update-x4f2v" podUID="e91519e6-bf55-4c08-8274-1d8a59f1ff52" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.747407 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.759499 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "glance" ]; then Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="glance" Feb 02 07:09:18 crc kubenswrapper[4842]: else Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:18 crc kubenswrapper[4842]: fi Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc 
kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.765016 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-2348-account-create-update-j8g5r" podUID="81e3e639-93f4-48d1-8a2f-89e48bcc5f1d" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.774639 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.774849 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics" containerID="cri-o://75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: W0202 07:09:18.781481 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90821e80_1367_4cf6_8087_fb83507223ec.slice/crio-6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078 WatchSource:0}: Error finding container 6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078: Status 404 returned error can't find the container with id 6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.785179 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm" (OuterVolumeSpecName: "kube-api-access-ljflm") pod "88d00cbf-6e28-4be5-abc2-6c77e76de81e" (UID: "88d00cbf-6e28-4be5-abc2-6c77e76de81e"). InnerVolumeSpecName "kube-api-access-ljflm". 
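
The account-create script embedded in these "Unhandled Error" dumps is truncated by the journal at the heredoc ($MYSQL_CMD < ...), so the operator's actual SQL is not recoverable from this log. Its comments, though, spell out the pattern: create the account with CREATE USER first (MySQL 8 no longer creates users implicitly on GRANT, and CREATE OR REPLACE USER is MariaDB-only), then apply password and TLS settings via ALTER USER so re-runs update an existing account in place. A minimal sketch of that pattern follows; the host, account name, and password variables are placeholders, not values taken from this log:

    #!/bin/bash
    # Sketch only: illustrates the CREATE-then-ALTER pattern the script's own
    # comments describe; the real heredoc body is truncated in the journal.
    # DB_HOST and DatabasePassword are placeholders for this illustration.
    MYSQL_CMD="mysql -h ${DB_HOST} -u root -P 3306"
    GRANT_DATABASE="cinder"          # the logged script falls back to "*" when unset
    $MYSQL_CMD <<EOF
    -- CREATE first: MySQL 8 forbids implicit user creation via GRANT,
    -- and MariaDB's CREATE OR REPLACE USER is not portable to MySQL.
    CREATE USER IF NOT EXISTS 'cinder'@'%';
    -- ALTER afterwards so password and TLS changes also apply on re-runs.
    ALTER USER 'cinder'@'%' IDENTIFIED BY '${DatabasePassword}';
    GRANT ALL PRIVILEGES ON ${GRANT_DATABASE}.* TO 'cinder'@'%';
    EOF

The same truncated script recurs below for neutron, barbican, and placement; only the database name changes.
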
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797046 4842 scope.go:117] "RemoveContainer" containerID="42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797593 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797710 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797722 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797733 4842 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797744 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797755 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797788 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797800 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797811 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.798385 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.798423 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:22.79840822 +0000 UTC m=+1388.175676122 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.815827 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.815885 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:22.815868447 +0000 UTC m=+1388.193136359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.816210 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.816397 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.817480 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "combined-ca-bundle". 
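
Every failure in this cluster of records is a missing-referent race rather than a kubelet fault: secret barbican-config-data, configmap rabbitmq-config-data, and serviceaccount barbican-barbican are all referenced by pods before whatever creates them has reconciled, and the kubelet simply backs off and retries the mount. A quick triage sketch, assuming cluster access; the object names are exactly as logged and the openstack namespace comes from the pod names:

    # Do the referenced objects exist yet? Names are taken from the log records.
    kubectl -n openstack get secret barbican-config-data
    kubectl -n openstack get configmap rabbitmq-config-data
    kubectl -n openstack get serviceaccount barbican-barbican
    # The same MountVolume.SetUp errors surface as events on the stuck pod:
    kubectl -n openstack describe pod barbican-api-654fdfd6b6-nrxvh

If the objects appear a few seconds later, the retries succeed on their own; persistent absence points at the controller that is supposed to create them.
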
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.828374 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.828481 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.842407 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.842613 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.842636 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.842741 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.844822 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "neutron" ]; then Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="neutron" Feb 02 07:09:18 crc kubenswrapper[4842]: else Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:18 crc 
kubenswrapper[4842]: fi Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.846894 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-bfdd-account-create-update-z7blt" podUID="90821e80-1367-4cf6-8087-fb83507223ec" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.849386 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerStarted","Data":"dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.849525 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log" containerID="cri-o://04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.849585 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker" containerID="cri-o://dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.862406 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.870069 4842 generic.go:334] "Generic (PLEG): container finished" podID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerID="5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d" exitCode=143 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.870165 4842 util.go:30] "No sandbox for pod can be found. 
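
Each "Killing container with a grace period" record means the runtime delivers SIGTERM and waits gracePeriod seconds before escalating to SIGKILL; the exitCode=143 entries nearby are containers that exited on that SIGTERM inside the window (143 = 128 + signal 15). A hedged check of where the 30 comes from, assuming the pod spec carries the default grace period:

    # gracePeriod=30 in the log mirrors the pod's terminationGracePeriodSeconds:
    kubectl -n openstack get pod barbican-worker-5cf958d9d9-vvzkc \
      -o jsonpath='{.spec.terminationGracePeriodSeconds}{"\n"}'
    # Exit code 143 decodes to SIGTERM (128 + 15):
    kill -l 143    # prints TERM in bash
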
Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.870742 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerDied","Data":"5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d"} Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.878616 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.878682 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.880232 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901464 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901507 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901517 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901526 4842 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.921267 4842 scope.go:117] "RemoveContainer" containerID="19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.924452 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.007023 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.013548 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data" (OuterVolumeSpecName: "config-data") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). 
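
The ExecSync and "Probe errored" records are readiness probes racing the teardown of ovn-controller-ovs-vctt8: the exec lands after the runtime has begun stopping ovsdb-server and ovs-vswitchd, so it fails with "container is not created or running" or "cannot register an exec PID: container is stopping". If the pod is being deleted this is expected noise, which can be confirmed from the node or the API; the container ID below is the one from the probe errors, and crictl is assumed to be available on the host:

    # On the node: the probed container should show a stopped/exited state.
    sudo crictl inspect a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c | grep -A2 '"state"'
    sudo crictl ps -a | grep ovn-controller-ovs
    # From the API: a set deletionTimestamp confirms the delete was requested.
    kubectl -n openstack get pod ovn-controller-ovs-vctt8 \
      -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'
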
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.018292 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.037386 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.040109 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.046865 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.059623 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.077279 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.111333 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.111626 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached" containerID="cri-o://95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4" gracePeriod=30 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.121257 4842 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.121282 4842 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.121294 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.177868 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.186625 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.222999 4842 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223359 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223370 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223390 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223396 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223409 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223415 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223427 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223433 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223440 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="mysql-bootstrap" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223445 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="mysql-bootstrap" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223453 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223459 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223490 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="init" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223496 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="init" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223503 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="dnsmasq-dns" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223509 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="dnsmasq-dns" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223518 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223526 4842 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223539 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223546 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223553 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223558 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223568 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223573 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223582 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223588 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223745 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223756 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223766 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223778 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223788 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223794 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223804 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223814 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223827 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" 
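
The cpu_manager and memory_manager lines here are kubelet pruning per-container resource-manager checkpoints for pods that no longer exist; despite the E-level tagging, "RemoveStaleState: removing container" is routine housekeeping during mass pod churn. The state being pruned lives in kubelet's checkpoint files and can be inspected directly, assuming the default /var/lib/kubelet paths:

    # CPU and memory manager assignments are checkpointed on disk; after
    # RemoveStaleState the deleted podUIDs should no longer appear in either.
    sudo python3 -m json.tool /var/lib/kubelet/cpu_manager_state
    sudo python3 -m json.tool /var/lib/kubelet/memory_manager_state
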
containerName="dnsmasq-dns" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223837 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223842 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.224411 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.225056 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" podStartSLOduration=6.225047335 podStartE2EDuration="6.225047335s" podCreationTimestamp="2026-02-02 07:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:09:18.931166864 +0000 UTC m=+1384.308434776" watchObservedRunningTime="2026-02-02 07:09:19.225047335 +0000 UTC m=+1384.602315247" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.228431 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.245930 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.258118 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.259406 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:19 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: if [ -n "barbican" ]; then Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="barbican" Feb 02 07:09:19 crc kubenswrapper[4842]: else Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:19 crc kubenswrapper[4842]: fi Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:19 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:19 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:19 crc kubenswrapper[4842]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:19 crc kubenswrapper[4842]: # support updates Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.262344 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-8e42-account-create-update-pssf7" podUID="92090cd2-6d30-4aec-81a2-f7d41c40b52d" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.273106 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.281896 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.293605 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.295444 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.303859 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.320497 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.325242 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.326458 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:19 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: if [ -n "placement" ]; then Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="placement" Feb 02 07:09:19 crc kubenswrapper[4842]: else Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:19 crc kubenswrapper[4842]: fi Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:19 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:19 crc kubenswrapper[4842]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:19 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:19 crc kubenswrapper[4842]: # support updates Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.329029 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-85ce-account-create-update-szhp5" podUID="79d5e0a1-8df4-4db1-aaf8-0d253163a522" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.352758 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.356742 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.371918 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.372150 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-cd7d86b6c-rcdjq" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" containerName="keystone-api" containerID="cri-o://4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" gracePeriod=30 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.402650 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.406269 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.435993 4842 scope.go:117] "RemoveContainer" containerID="b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453304 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453348 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453764 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs" (OuterVolumeSpecName: "logs") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453798 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453825 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454058 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454553 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454625 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454647 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454709 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454855 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.454955 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.454999 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. 
No retries permitted until 2026-02-02 07:09:19.954982351 +0000 UTC m=+1385.332250263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.458567 4842 projected.go:194] Error preparing data for projected volume kube-api-access-jw8v8 for pod openstack/keystone-0ec7-account-create-update-9srfz: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.458629 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8 podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:19.958612334 +0000 UTC m=+1385.335880256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jw8v8" (UniqueName: "kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.459660 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" path="/var/lib/kubelet/pods/115a51a9-6125-46e1-a960-a66cb9957d38/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.463667 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.464513 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.465482 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.468202 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "internal-tls-certs". 
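
The retry arithmetic is visible across these records: fresh failures, like the keystone-0ec7 operator-scripts and token mounts just above, get durationBeforeRetry 500ms, while volumes that have been failing for a while, like barbican's config-data earlier in this window, are already at 4s. The kubelet doubles the per-operation delay on each consecutive failure up to a cap. A toy model of that schedule, with an illustrative cap value that is not taken from this log:

    # Toy model of the kubelet's per-operation exponential backoff:
    # 500ms doubles on every consecutive failure until it hits a cap.
    delay_ms=500; cap_ms=120000      # cap is illustrative, not from this log
    for attempt in 1 2 3 4 5; do
      echo "failure ${attempt}: next retry permitted after ${delay_ms}ms"
      delay_ms=$(( delay_ms * 2 > cap_ms ? cap_ms : delay_ms * 2 ))
    done

Running the sketch prints 500, 1000, 2000, 4000 ms, matching the 500ms first-retry and 4s later-retry values seen in these records.
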
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.478814 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" path="/var/lib/kubelet/pods/226a55ec-a7c1-4c34-953c-bb4e549b0fc5/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.479555 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b89146d-a545-4525-8744-723e0d9248b5" path="/var/lib/kubelet/pods/3b89146d-a545-4525-8744-723e0d9248b5/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.480059 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590d1088-e964-43a6-b879-01c8b83d4147" path="/var/lib/kubelet/pods/590d1088-e964-43a6-b879-01c8b83d4147/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.486528 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" path="/var/lib/kubelet/pods/6601a68f-34a5-4629-ac74-97cb14e809f3/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.487076 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" path="/var/lib/kubelet/pods/82827ec9-ac05-41ab-988c-99083ccdb949/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.503380 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.503607 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31583c1-5fde-4763-a889-7257255fa217" path="/var/lib/kubelet/pods/a31583c1-5fde-4763-a889-7257255fa217/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.508498 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a768c72b-df6d-463e-b085-996d7b910985" path="/var/lib/kubelet/pods/a768c72b-df6d-463e-b085-996d7b910985/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.513984 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" path="/var/lib/kubelet/pods/bff6dd37-52b7-41b4-bc15-4f6436cdabc7/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.516958 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" path="/var/lib/kubelet/pods/e467a49f-fdc1-4a9e-9907-4425f5ec6177/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.526153 4842 scope.go:117] "RemoveContainer" containerID="8bb94b1491e283b01c189ac6006d3fc23945dfbdff62fb805e090497b073e7c4" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.536936 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.547885 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555738 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555809 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555878 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"5130c998-8bfd-413c-887e-2100da96f6ce\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555957 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555984 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556015 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556039 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556057 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"5130c998-8bfd-413c-887e-2100da96f6ce\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556093 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556133 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556438 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556451 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556508 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556562 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556681 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556693 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556704 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556713 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556721 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.557186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " 
pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.557913 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5130c998-8bfd-413c-887e-2100da96f6ce" (UID: "5130c998-8bfd-413c-887e-2100da96f6ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.558255 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.558896 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.561400 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.569377 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc" (OuterVolumeSpecName: "kube-api-access-pqwsc") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "kube-api-access-pqwsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.580324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2" (OuterVolumeSpecName: "kube-api-access-r2cq2") pod "5130c998-8bfd-413c-887e-2100da96f6ce" (UID: "5130c998-8bfd-413c-887e-2100da96f6ce"). InnerVolumeSpecName "kube-api-access-r2cq2". 
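The "Cleaned up orphaned pod volumes dir" lines a little above are the kubelet's housekeeping pass removing /var/lib/kubelet/pods/&lt;uid&gt;/volumes for pods it no longer tracks. A read-only sketch of the same directory walk for inspecting leftovers by hand; it assumes the standard kubelet root and only lists, never deletes.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	root := "/var/lib/kubelet/pods" // standard kubelet pod directory
	pods, err := os.ReadDir(root)
	if err != nil {
		panic(err)
	}
	for _, p := range pods {
		if !p.IsDir() {
			continue
		}
		// Each surviving plugin dir here is what the kubelet would have to
		// tear down before logging "Cleaned up orphaned pod volumes dir".
		plugins, err := os.ReadDir(filepath.Join(root, p.Name(), "volumes"))
		if err != nil {
			continue // volumes dir already gone for this pod UID
		}
		for _, plug := range plugins {
			fmt.Printf("pod %s still has volume plugin dir %s\n", p.Name(), plug.Name())
		}
	}
}
```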
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.589251 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.602041 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": dial tcp 10.217.0.201:3000: connect: connection refused" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.606235 4842 scope.go:117] "RemoveContainer" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.626887 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.630596 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.631082 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jw8v8 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-0ec7-account-create-update-9srfz" podUID="db5059ce-9214-449d-a8d5-1b6ab7447e65" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.664640 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"8dad4bc1-b1ae-436c-925e-986d33b77e51\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.664696 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"8dad4bc1-b1ae-436c-925e-986d33b77e51\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665178 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665194 4842 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665205 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665227 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc 
kubenswrapper[4842]: I0202 07:09:19.665236 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665654 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8dad4bc1-b1ae-436c-925e-986d33b77e51" (UID: "8dad4bc1-b1ae-436c-925e-986d33b77e51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.676596 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.685201 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t" (OuterVolumeSpecName: "kube-api-access-skr4t") pod "8dad4bc1-b1ae-436c-925e-986d33b77e51" (UID: "8dad4bc1-b1ae-436c-925e-986d33b77e51"). InnerVolumeSpecName "kube-api-access-skr4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.686029 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data" (OuterVolumeSpecName: "config-data") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.700325 4842 scope.go:117] "RemoveContainer" containerID="29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.713924 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.717292 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "internal-tls-certs". 
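The "Probe failed" record above for ceilometer-0's proxy-httpd container is an HTTPS GET readiness check against port 3000 getting connection-refused. A sketch of a probe declaration that produces exactly that traffic; only the scheme and port come from the log line, the path and timing fields are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Readiness probe of the shape behind the failure above: the kubelet
	// issues Get https://<pod-ip>:3000/ and marks the pod unready on error.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(3000),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 3,  // assumed
	}
	fmt.Printf("%+v\n", probe)
}
```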
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.732967 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.733057 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.746090 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.748385 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.748417 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.749039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.751890 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera" containerID="cri-o://c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" gracePeriod=30 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.754784 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.755670 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "public-tls-certs". 
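"Killing container with a grace period ... gracePeriod=30" above means SIGTERM immediately, SIGKILL only if the galera process outlives the 30-second window. A standalone sketch of that escalation, with "sleep 300" standing in for the container's main process.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300") // stand-in for the galera process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Polite stop first, exactly what gracePeriod=30 grants the container.
	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(30 * time.Second):
		_ = cmd.Process.Kill() // escalation once the window expires
		<-done
		fmt.Println("killed after the grace period")
	}
}
```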
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.766922 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.766975 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.766994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767045 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767070 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767360 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767379 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767388 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767397 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767406 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767942 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.768360 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs" (OuterVolumeSpecName: "logs") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.782245 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.783544 4842 scope.go:117] "RemoveContainer" containerID="7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.783955 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts" (OuterVolumeSpecName: "scripts") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.809447 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.815483 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.832188 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.842038 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.845947 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.864342 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.870969 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871030 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871055 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871126 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871813 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871827 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871837 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871846 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871855 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.895928 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.900989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-szhp5" event={"ID":"79d5e0a1-8df4-4db1-aaf8-0d253163a522","Type":"ContainerStarted","Data":"92c5616de7100c6457ed5b0dcd602dadf7228bf9da3a33c8035d364e9130e12d"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.911857 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4" (OuterVolumeSpecName: "kube-api-access-4fmp4") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "kube-api-access-4fmp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.918598 4842 generic.go:334] "Generic (PLEG): container finished" podID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerID="04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc" exitCode=143 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.918646 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerDied","Data":"04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.925793 4842 generic.go:334] "Generic (PLEG): container finished" podID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerID="c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab" exitCode=0 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.925843 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerDied","Data":"c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.931771 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-pssf7" event={"ID":"92090cd2-6d30-4aec-81a2-f7d41c40b52d","Type":"ContainerStarted","Data":"841933402afec6053b59c1d117b644948866331ecb15d4942a1241af82efdbd6"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.939536 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941028 4842 generic.go:334] "Generic (PLEG): container finished" podID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" exitCode=0 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941087 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerDied","Data":"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941113 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerDied","Data":"f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941130 4842 scope.go:117] "RemoveContainer" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941279 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.959985 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerDied","Data":"c97160040d0350fa9bd5e1bbc3b5084d4e4f379ea92abc97f8017a5311a0c9cf"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.960066 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972489 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972768 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972906 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972964 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.973015 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.972992 4842 configmap.go:193] Couldn't get configMap 
openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.973171 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:20.973155246 +0000 UTC m=+1386.350423158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.973434 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.973502 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.973485904 +0000 UTC m=+1389.350753816 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.975804 4842 projected.go:194] Error preparing data for projected volume kube-api-access-jw8v8 for pod openstack/keystone-0ec7-account-create-update-9srfz: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.997640 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8 podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:20.997610861 +0000 UTC m=+1386.374878773 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jw8v8" (UniqueName: "kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.003299 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data" (OuterVolumeSpecName: "config-data") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.051500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "public-tls-certs". 
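At this point both mounts sit in backoff (1s for operator-scripts, 4s for the rabbitmq config) and will keep failing until the referenced ConfigMaps exist again. A client-go sketch that checks for exactly these objects the way an operator or admin would; the kubeconfig path is an assumption.

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The two ConfigMaps the kubelet is waiting for in the errors above.
	for _, name := range []string{"openstack-scripts", "rabbitmq-cell1-config-data"} {
		_, err := cs.CoreV1().ConfigMaps("openstack").Get(
			context.TODO(), name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			fmt.Printf("configmap %q missing: mounts stay in backoff\n", name)
		case err != nil:
			panic(err)
		default:
			fmt.Printf("configmap %q present: next retry should succeed\n", name)
		}
	}
}
```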
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.067382 4842 generic.go:334] "Generic (PLEG): container finished" podID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerID="75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9" exitCode=2 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.067611 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerDied","Data":"75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.071076 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" event={"ID":"88d00cbf-6e28-4be5-abc2-6c77e76de81e","Type":"ContainerDied","Data":"595b44b024cc413350c4c52a2edd391699f6565dcef71575de95c9a8d45985fb"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.071095 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.075995 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.076323 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.085545 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-x4f2v" event={"ID":"e91519e6-bf55-4c08-8274-1d8a59f1ff52","Type":"ContainerStarted","Data":"16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.145553 4842 generic.go:334] "Generic (PLEG): container finished" podID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" exitCode=143 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.145636 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerDied","Data":"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.174424 4842 generic.go:334] "Generic (PLEG): container finished" podID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerID="bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00" exitCode=0 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.174500 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerDied","Data":"bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.204813 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerStarted","Data":"13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.205481 4842 kubelet_pods.go:1007] "Unable to 
retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-kl9p2" secret="" err="secret \"galera-openstack-dockercfg-xfhgf\" not found" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.205511 4842 scope.go:117] "RemoveContainer" containerID="13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.205882 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-kl9p2_openstack(b912e45d-72e7-4250-9757-add1efcfb054)\"" pod="openstack/root-account-create-update-kl9p2" podUID="b912e45d-72e7-4250-9757-add1efcfb054" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.234532 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" event={"ID":"5130c998-8bfd-413c-887e-2100da96f6ce","Type":"ContainerDied","Data":"edae9a46c8962c16de1f47c9594d864df221b1f93bbc0bdc1a42fba426cadc08"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.234649 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.259392 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-z7blt" event={"ID":"90821e80-1367-4cf6-8087-fb83507223ec","Type":"ContainerStarted","Data":"6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.304786 4842 scope.go:117] "RemoveContainer" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.320861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-j8g5r" event={"ID":"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d","Type":"ContainerStarted","Data":"f55c42fda20e7505f223b55e3afbf9284af6c4d7c17fcc411b0d5c1ee7acf9ca"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.335733 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.337658 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-fbkfk" event={"ID":"8dad4bc1-b1ae-436c-925e-986d33b77e51","Type":"ContainerDied","Data":"19b5b9e6138f019e100c7874a7e9ab2b0be50a7d46a7fd240461e516fb3462c0"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.337808 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.377355 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.388973 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.389027 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts podName:b912e45d-72e7-4250-9757-add1efcfb054 nodeName:}" failed. 
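"Unable to retrieve pull secret" above is a warning, not a hard failure: the pod continues, just without the dockercfg credentials its ServiceAccount referenced. A sketch that audits which pull secrets a ServiceAccount still resolves; the namespace and names are from the log, the client scaffolding is assumed.

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sa, err := cs.CoreV1().ServiceAccounts("openstack").Get(
		context.TODO(), "galera-openstack", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Println("service account already deleted; nothing references the dockercfg secret")
		return
	} else if err != nil {
		panic(err)
	}
	// Each entry must name an existing Secret for image pulls to use it,
	// which is what the kubelet failed to find above.
	for _, ref := range sa.ImagePullSecrets {
		_, err := cs.CoreV1().Secrets("openstack").Get(
			context.TODO(), ref.Name, metav1.GetOptions{})
		fmt.Printf("pull secret %q resolvable: %v\n", ref.Name, err == nil)
	}
}
```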
No retries permitted until 2026-02-02 07:09:20.889011945 +0000 UTC m=+1386.266279857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts") pod "root-account-create-update-kl9p2" (UID: "b912e45d-72e7-4250-9757-add1efcfb054") : configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.397150 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.405945 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3" exitCode=0 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.405990 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef" exitCode=2 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406000 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621" exitCode=0 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406079 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406754 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406792 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406893 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.442820 4842 scope.go:117] "RemoveContainer" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.444358 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab\": container with ID starting with 35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab not found: ID does not exist" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.444388 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab"} err="failed to get container status \"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab\": rpc error: code = NotFound desc = could not find container \"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab\": container with ID starting with 35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab not found: ID does not exist" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.444408 4842 scope.go:117] "RemoveContainer" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.446250 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070\": container with ID starting with bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070 not found: ID does not exist" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.446277 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070"} err="failed to get container status \"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070\": rpc error: code = NotFound desc = could not find container \"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070\": container with ID starting with bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070 not found: ID does not exist" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.446295 4842 scope.go:117] "RemoveContainer" containerID="49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.475500 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.491704 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.491873 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.491994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492587 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492766 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492832 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492909 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.493023 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.493140 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.503585 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2" (OuterVolumeSpecName: "kube-api-access-kz5c2") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "kube-api-access-kz5c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.505187 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.505255 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.505272 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.507194 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs" (OuterVolumeSpecName: "logs") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.560206 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data" (OuterVolumeSpecName: "config-data") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.562453 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268" (OuterVolumeSpecName: "kube-api-access-7x268") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-api-access-7x268". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.576012 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596026 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596054 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596063 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596072 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.654907 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.663794 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.679689 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.680887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.690110 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697830 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697952 4842 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697980 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697989 4842 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697999 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.709729 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.722148 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.737794 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.744516 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.798983 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.799169 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: W0202 07:09:20.799701 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c/volumes/kubernetes.io~secret/kube-state-metrics-tls-certs Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.799718 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.900971 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.900999 4842 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.901009 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.901085 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.901128 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts podName:b912e45d-72e7-4250-9757-add1efcfb054 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:21.901114144 +0000 UTC m=+1387.278382056 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts") pod "root-account-create-update-kl9p2" (UID: "b912e45d-72e7-4250-9757-add1efcfb054") : configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.952575 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.956880 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.991649 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.991710 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.002145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.002241 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.002726 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.002763 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.002750992 +0000 UTC m=+1388.380018904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.005482 4842 projected.go:194] Error preparing data for projected volume kube-api-access-jw8v8 for pod openstack/keystone-0ec7-account-create-update-9srfz: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.005554 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8 podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.005531973 +0000 UTC m=+1388.382799885 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jw8v8" (UniqueName: "kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.146468 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e4d672b_cb7a_406d_ab62_12745f300ef0.slice/crio-95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf00b7c2b_79ea_4cd1_80c3_f74f7e398ffd.slice/crio-36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod679e6e39_029a_452e_a375_bf0b937e3fbe.slice/crio-conmon-aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e4d672b_cb7a_406d_ab62_12745f300ef0.slice/crio-conmon-95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod174fcd53_40ab_4d19_a317_bc5cd117d2a4.slice/crio-conmon-b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf00b7c2b_79ea_4cd1_80c3_f74f7e398ffd.slice/crio-conmon-36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.415838 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-j8g5r" event={"ID":"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d","Type":"ContainerDied","Data":"f55c42fda20e7505f223b55e3afbf9284af6c4d7c17fcc411b0d5c1ee7acf9ca"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.415881 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55c42fda20e7505f223b55e3afbf9284af6c4d7c17fcc411b0d5c1ee7acf9ca" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417833 4842 generic.go:334] "Generic (PLEG): container finished" podID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerID="83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerDied","Data":"83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417893 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerDied","Data":"fd6b7a98a2a46a28710ac379918018f758437a367de16692a4e1403ffd79ebbd"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417902 4842 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fd6b7a98a2a46a28710ac379918018f758437a367de16692a4e1403ffd79ebbd" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419026 4842 generic.go:334] "Generic (PLEG): container finished" podID="34f55116-a518-4f21-8816-6f8232a6f68d" containerID="72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerDied","Data":"72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419074 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerDied","Data":"03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419083 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420329 4842 generic.go:334] "Generic (PLEG): container finished" podID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerID="50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420422 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerDied","Data":"50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420449 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerDied","Data":"1eecf23079bd634775107b900580aa4bb87379a656bc114e56acf8d85609c009"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420460 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eecf23079bd634775107b900580aa4bb87379a656bc114e56acf8d85609c009" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.421298 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-pssf7" event={"ID":"92090cd2-6d30-4aec-81a2-f7d41c40b52d","Type":"ContainerDied","Data":"841933402afec6053b59c1d117b644948866331ecb15d4942a1241af82efdbd6"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.421320 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841933402afec6053b59c1d117b644948866331ecb15d4942a1241af82efdbd6" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422425 4842 generic.go:334] "Generic (PLEG): container finished" podID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerID="95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422470 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerDied","Data":"95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422508 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerDied","Data":"ccad06562fb6f40d062777e6d3a6e4d9830ae7a447085c52c329d40fd37ced11"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422519 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccad06562fb6f40d062777e6d3a6e4d9830ae7a447085c52c329d40fd37ced11" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.423489 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-szhp5" event={"ID":"79d5e0a1-8df4-4db1-aaf8-0d253163a522","Type":"ContainerDied","Data":"92c5616de7100c6457ed5b0dcd602dadf7228bf9da3a33c8035d364e9130e12d"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.423519 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92c5616de7100c6457ed5b0dcd602dadf7228bf9da3a33c8035d364e9130e12d" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.424468 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-z7blt" event={"ID":"90821e80-1367-4cf6-8087-fb83507223ec","Type":"ContainerDied","Data":"6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.424490 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425759 4842 generic.go:334] "Generic (PLEG): container finished" podID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerID="c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425801 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerDied","Data":"c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425818 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerDied","Data":"33a7212242745098719539d77d7d2ab10cc0d6841f34ba8ac2dabc8a942c26b5"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425827 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a7212242745098719539d77d7d2ab10cc0d6841f34ba8ac2dabc8a942c26b5" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.427441 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerDied","Data":"22718259310cd947182a28b08951d593ee087b709a27af6ee23d9b940e93c5ac"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.427467 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22718259310cd947182a28b08951d593ee087b709a27af6ee23d9b940e93c5ac" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.429289 4842 generic.go:334] "Generic (PLEG): container finished" podID="b912e45d-72e7-4250-9757-add1efcfb054" containerID="13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240" exitCode=1 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.429349 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" 
event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerDied","Data":"13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.440263 4842 generic.go:334] "Generic (PLEG): container finished" podID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerID="36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.442273 4842 generic.go:334] "Generic (PLEG): container finished" podID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerID="aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.447018 4842 generic.go:334] "Generic (PLEG): container finished" podID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.448705 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" path="/var/lib/kubelet/pods/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.449262 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4450e400-557b-4092-8f73-124910137dc4" path="/var/lib/kubelet/pods/4450e400-557b-4092-8f73-124910137dc4/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.449762 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5130c998-8bfd-413c-887e-2100da96f6ce" path="/var/lib/kubelet/pods/5130c998-8bfd-413c-887e-2100da96f6ce/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.450090 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b63114-a275-4e32-9ad4-9f59e22151b3" path="/var/lib/kubelet/pods/72b63114-a275-4e32-9ad4-9f59e22151b3/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.451423 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d00cbf-6e28-4be5-abc2-6c77e76de81e" path="/var/lib/kubelet/pods/88d00cbf-6e28-4be5-abc2-6c77e76de81e/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.452994 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dad4bc1-b1ae-436c-925e-986d33b77e51" path="/var/lib/kubelet/pods/8dad4bc1-b1ae-436c-925e-986d33b77e51/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.453181 4842 generic.go:334] "Generic (PLEG): container finished" podID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.468630 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" path="/var/lib/kubelet/pods/900b2d20-01c8-47e0-8271-ccfd8549d468/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.469793 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.470287 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" path="/var/lib/kubelet/pods/9eff2351-b4e8-43cf-a232-9c36cb11c130/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.472077 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" path="/var/lib/kubelet/pods/bed4dadb-b854-4082-b18a-67f58543bb9a/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473145 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-x4f2v" event={"ID":"e91519e6-bf55-4c08-8274-1d8a59f1ff52","Type":"ContainerDied","Data":"16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473177 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473188 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerDied","Data":"36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473210 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerDied","Data":"aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerDied","Data":"aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerDied","Data":"b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473288 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerDied","Data":"c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.486834 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.501253 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.515808 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.517428 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.522245 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.524957 4842 scope.go:117] "RemoveContainer" containerID="1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.534890 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.534983 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.537971 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.538245 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.539570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerDied","Data":"97d85497136bca54efa2ce8c8d3033b9016ab0e739dcabcdf04a8ad306a7c1b7"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.568050 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.598125 4842 scope.go:117] "RemoveContainer" containerID="9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.609441 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.610530 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.616168 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.626423 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.635328 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.641393 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.644566 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661406 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661482 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661566 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661611 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661632 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661665 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661701 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661730 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661768 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661788 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9mmn\" (UniqueName: 
\"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661817 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661850 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661881 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661911 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.662736 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e91519e6-bf55-4c08-8274-1d8a59f1ff52" (UID: "e91519e6-bf55-4c08-8274-1d8a59f1ff52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.663262 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92090cd2-6d30-4aec-81a2-f7d41c40b52d" (UID: "92090cd2-6d30-4aec-81a2-f7d41c40b52d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.667753 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng" (OuterVolumeSpecName: "kube-api-access-rc9ng") pod "79d5e0a1-8df4-4db1-aaf8-0d253163a522" (UID: "79d5e0a1-8df4-4db1-aaf8-0d253163a522"). InnerVolumeSpecName "kube-api-access-rc9ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.667993 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d" (UID: "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.672572 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79d5e0a1-8df4-4db1-aaf8-0d253163a522" (UID: "79d5e0a1-8df4-4db1-aaf8-0d253163a522"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.675276 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs" (OuterVolumeSpecName: "logs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.675711 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn" (OuterVolumeSpecName: "kube-api-access-q9mmn") pod "e91519e6-bf55-4c08-8274-1d8a59f1ff52" (UID: "e91519e6-bf55-4c08-8274-1d8a59f1ff52"). InnerVolumeSpecName "kube-api-access-q9mmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.680887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx" (OuterVolumeSpecName: "kube-api-access-nh8lx") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "kube-api-access-nh8lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.687250 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.695692 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x" (OuterVolumeSpecName: "kube-api-access-8cg6x") pod "92090cd2-6d30-4aec-81a2-f7d41c40b52d" (UID: "92090cd2-6d30-4aec-81a2-f7d41c40b52d"). InnerVolumeSpecName "kube-api-access-8cg6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.703837 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data" (OuterVolumeSpecName: "config-data") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.704070 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.715416 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf" (OuterVolumeSpecName: "kube-api-access-c9wrf") pod "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d" (UID: "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d"). InnerVolumeSpecName "kube-api-access-c9wrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.715507 4842 scope.go:117] "RemoveContainer" containerID="75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.724103 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.726055 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.728141 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.731767 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.731814 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.749086 4842 scope.go:117] "RemoveContainer" containerID="c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763782 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763826 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: 
\"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763864 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763884 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763910 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763956 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763972 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763989 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764005 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764024 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764046 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 
07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764077 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764105 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764160 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764177 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764227 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764245 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764270 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764303 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 
07:09:21.764318 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"90821e80-1367-4cf6-8087-fb83507223ec\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764401 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764438 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764455 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767135 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs" (OuterVolumeSpecName: "logs") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767508 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767535 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767552 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767571 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767668 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767712 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767739 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767759 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767777 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: 
\"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767795 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767813 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"90821e80-1367-4cf6-8087-fb83507223ec\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767834 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767860 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767875 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768394 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768408 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768417 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768426 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768435 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768444 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768453 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc9ng\" 
(UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768462 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768470 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768478 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768487 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768495 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768504 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.774660 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.781287 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.782111 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs" (OuterVolumeSpecName: "logs") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.784768 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs" (OuterVolumeSpecName: "logs") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.787839 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts" (OuterVolumeSpecName: "scripts") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.788342 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.788420 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt" (OuterVolumeSpecName: "kube-api-access-d5nxt") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "kube-api-access-d5nxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.788588 4842 scope.go:117] "RemoveContainer" containerID="415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.789913 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90821e80-1367-4cf6-8087-fb83507223ec" (UID: "90821e80-1367-4cf6-8087-fb83507223ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.789959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data" (OuterVolumeSpecName: "config-data") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.790727 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs" (OuterVolumeSpecName: "logs") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.794667 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.794962 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.802613 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.807437 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs" (OuterVolumeSpecName: "logs") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.813055 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs" (OuterVolumeSpecName: "kube-api-access-5svcs") pod "90821e80-1367-4cf6-8087-fb83507223ec" (UID: "90821e80-1367-4cf6-8087-fb83507223ec"). InnerVolumeSpecName "kube-api-access-5svcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.813943 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts" (OuterVolumeSpecName: "scripts") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814058 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l" (OuterVolumeSpecName: "kube-api-access-9rq6l") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "kube-api-access-9rq6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814650 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx" (OuterVolumeSpecName: "kube-api-access-ngbgx") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "kube-api-access-ngbgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814717 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9" (OuterVolumeSpecName: "kube-api-access-rkmc9") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "kube-api-access-rkmc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814752 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5" (OuterVolumeSpecName: "kube-api-access-r9pr5") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "kube-api-access-r9pr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.817878 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts" (OuterVolumeSpecName: "scripts") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.818543 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.819618 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.820683 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.825873 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.837515 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.838078 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.848507 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk" (OuterVolumeSpecName: "kube-api-access-zscmk") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "kube-api-access-zscmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.854046 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.869359 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.869700 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870092 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870368 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870462 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.871382 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.871479 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.872168 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.872383 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.872480 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873032 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873488 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873557 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873621 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873686 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873738 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873787 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873835 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873910 4842 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873981 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874033 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874082 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874133 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874182 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874258 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874333 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874392 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874476 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874530 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874580 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874638 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874688 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874743 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874795 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.871955 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870038 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: W0202 07:09:21.870206 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25609b1c-e1e9-4633-b3e3-93bd2f4396de/volumes/kubernetes.io~secret/internal-tls-certs Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.876634 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873228 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.884862 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.907912 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts" (OuterVolumeSpecName: "scripts") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.919622 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.928615 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq" (OuterVolumeSpecName: "kube-api-access-4btlq") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "kube-api-access-4btlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.939764 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.958515 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.964777 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975634 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975750 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975789 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975839 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975865 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976141 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs" (OuterVolumeSpecName: "logs") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976308 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976359 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976466 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976506 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976542 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976562 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976942 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976956 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976966 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976975 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976984 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc 
kubenswrapper[4842]: I0202 07:09:21.976993 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977005 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977017 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977027 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977035 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977044 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977052 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.977106 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.977150 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts podName:b912e45d-72e7-4250-9757-add1efcfb054 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.977135838 +0000 UTC m=+1389.354403750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts") pod "root-account-create-update-kl9p2" (UID: "b912e45d-72e7-4250-9757-add1efcfb054") : configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993309 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993472 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data" (OuterVolumeSpecName: "config-data") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993639 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993901 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws" (OuterVolumeSpecName: "kube-api-access-9lfws") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "kube-api-access-9lfws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993977 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7" (OuterVolumeSpecName: "kube-api-access-58tm7") pod "4850512e-bbc8-468d-94ef-1d1be3b0b49c" (UID: "4850512e-bbc8-468d-94ef-1d1be3b0b49c"). InnerVolumeSpecName "kube-api-access-58tm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.001449 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq" (OuterVolumeSpecName: "kube-api-access-k69gq") pod "1f94c60e-a4fc-4b7d-96cd-367d46a731c4" (UID: "1f94c60e-a4fc-4b7d-96cd-367d46a731c4"). InnerVolumeSpecName "kube-api-access-k69gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.020603 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.028245 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.029157 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.036440 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data" (OuterVolumeSpecName: "config-data") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.039848 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.042141 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data" (OuterVolumeSpecName: "config-data") pod "4850512e-bbc8-468d-94ef-1d1be3b0b49c" (UID: "4850512e-bbc8-468d-94ef-1d1be3b0b49c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.045182 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.045859 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data" (OuterVolumeSpecName: "config-data") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.047629 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.047988 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.052556 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.061498 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.069967 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4850512e-bbc8-468d-94ef-1d1be3b0b49c" (UID: "4850512e-bbc8-468d-94ef-1d1be3b0b49c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.071435 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.072887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f94c60e-a4fc-4b7d-96cd-367d46a731c4" (UID: "1f94c60e-a4fc-4b7d-96cd-367d46a731c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078576 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078597 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078608 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078617 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078626 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078638 4842 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078649 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078660 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078672 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078682 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078690 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078700 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078709 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078718 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078726 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078735 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078745 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078753 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078762 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078773 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078781 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc 
kubenswrapper[4842]: I0202 07:09:22.087108 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.088795 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data" (OuterVolumeSpecName: "config-data") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.095612 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.135721 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.141392 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data" (OuterVolumeSpecName: "config-data") pod "1f94c60e-a4fc-4b7d-96cd-367d46a731c4" (UID: "1f94c60e-a4fc-4b7d-96cd-367d46a731c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.145147 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data" (OuterVolumeSpecName: "config-data") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.145377 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data" (OuterVolumeSpecName: "config-data") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.154417 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.173155 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data" (OuterVolumeSpecName: "config-data") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180283 4842 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180316 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180325 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180334 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180345 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180353 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180361 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180384 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180394 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.315935 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:22 crc kubenswrapper[4842]: W0202 07:09:22.345035 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f0d774_dbe6_45d5_9ffa_64383c8be0d7.slice/crio-2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e WatchSource:0}: Error finding container 2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e: Status 404 returned error can't find the container with id 
2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.369445 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.482194 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.488116 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"b912e45d-72e7-4250-9757-add1efcfb054\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.488278 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"b912e45d-72e7-4250-9757-add1efcfb054\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.489139 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b912e45d-72e7-4250-9757-add1efcfb054" (UID: "b912e45d-72e7-4250-9757-add1efcfb054"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.498824 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6" (OuterVolumeSpecName: "kube-api-access-wz2n6") pod "b912e45d-72e7-4250-9757-add1efcfb054" (UID: "b912e45d-72e7-4250-9757-add1efcfb054"). InnerVolumeSpecName "kube-api-access-wz2n6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.512842 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.514066 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.515864 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.515891 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.558795 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerDied","Data":"95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.558849 4842 scope.go:117] "RemoveContainer" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.558947 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.564805 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerDied","Data":"f8175b6df5dfbdeb4f2b96118c96bb8462df0286a53b3bdcaea8cf46054c0053"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.564878 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.579519 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerDied","Data":"c436c98ac030592508317571235d4b580f2fca45d60bf44a940ecdb59f089266"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.579627 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.583103 4842 scope.go:117] "RemoveContainer" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589290 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589322 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589402 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589432 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589452 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589524 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589545 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589693 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.590096 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.590131 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz2n6\" (UniqueName: 
\"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592113 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592080 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592694 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592814 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.601372 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"dc072634ce1fdc7d7f270a2d47917083559fd131ffec946966f43f1f6581f8f4"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.601465 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.606686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerDied","Data":"eb1c879ce0521868ffea7d5ca4ba1e741e4b7c55bb4a6410da53f5413323bc74"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.606797 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.608534 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6064786a-fa53-47a7-88ee-384cf70a86c6/ovn-northd/0.log" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.609413 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.620268 4842 scope.go:117] "RemoveContainer" containerID="13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.620574 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerStarted","Data":"2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.625129 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6" (OuterVolumeSpecName: "kube-api-access-848c6") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "kube-api-access-848c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.625600 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631118 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6064786a-fa53-47a7-88ee-384cf70a86c6/ovn-northd/0.log" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631158 4842 generic.go:334] "Generic (PLEG): container finished" podID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" exitCode=139 Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631581 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerDied","Data":"6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631715 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.646465 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.655306 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.660070 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.660485 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerDied","Data":"1a2fdbaaf7cba0dd3058c59daa47fefc2d3624684698fe684e8a50e2db075890"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.680976 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691172 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691199 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691209 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691230 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691238 4842 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691247 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.698867 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700473 4842 generic.go:334] "Generic (PLEG): container finished" podID="709c39fb-802f-4690-89f6-41a717e7244c" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" exitCode=0 Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700554 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerDied","Data":"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700615 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700684 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700761 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700791 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700821 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700843 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700868 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700895 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700922 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700948 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700975 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700975 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerDied","Data":"b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.701059 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.710002 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.711863 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.721875 4842 scope.go:117] "RemoveContainer" containerID="bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.739102 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.744543 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.792704 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.792762 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793176 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts" (OuterVolumeSpecName: "scripts") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793314 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793349 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config" (OuterVolumeSpecName: "config") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793598 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793666 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793693 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793990 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794341 4842 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794352 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794369 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794378 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794388 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.798481 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq" (OuterVolumeSpecName: "kube-api-access-4qdwq") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "kube-api-access-4qdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.826552 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.838410 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.838601 4842 scope.go:117] "RemoveContainer" containerID="4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.847135 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.857243 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.869686 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.879524 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.891548 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.895933 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.895958 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.895968 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.896025 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.896072 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:30.896057314 +0000 UTC m=+1396.273325226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.911367 4842 scope.go:117] "RemoveContainer" containerID="b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.912931 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.917119 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.929988 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.938501 4842 scope.go:117] "RemoveContainer" containerID="454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.941388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.945839 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.969810 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.975820 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.986757 4842 scope.go:117] "RemoveContainer" containerID="aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.997417 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.997440 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.031733 4842 scope.go:117] "RemoveContainer" containerID="5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.064400 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.076139 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.077316 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.081367 4842 scope.go:117] "RemoveContainer" containerID="e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.088935 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.098721 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.127343 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.148436 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.158962 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.173988 4842 scope.go:117] "RemoveContainer" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.175109 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.181205 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.194321 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.208467 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.208673 4842 scope.go:117] "RemoveContainer" containerID="36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.239412 4842 scope.go:117] "RemoveContainer" containerID="2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.253721 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.264428 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.265431 4842 scope.go:117] "RemoveContainer" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.269852 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.286925 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.293124 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.293455 4842 scope.go:117] "RemoveContainer" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.297938 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312311 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312396 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc 
kubenswrapper[4842]: I0202 07:09:23.312428 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312469 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312597 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312643 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312694 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312741 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.315159 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.320425 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8" (OuterVolumeSpecName: "kube-api-access-457v8") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "kube-api-access-457v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.320646 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts" (OuterVolumeSpecName: "scripts") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.325955 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.333981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.334743 4842 scope.go:117] "RemoveContainer" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.335140 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c\": container with ID starting with c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c not found: ID does not exist" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.335180 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c"} err="failed to get container status \"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c\": rpc error: code = NotFound desc = could not find container \"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c\": container with ID starting with c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.335242 4842 scope.go:117] "RemoveContainer" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.336291 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d\": container with ID starting with 97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d not found: ID does not exist" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.336313 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d"} err="failed to get container status \"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d\": rpc error: code = NotFound desc = could not find container \"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d\": container with ID starting with 97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.339943 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.351575 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.352601 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.370450 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.375529 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.376116 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.382563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data" (OuterVolumeSpecName: "config-data") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.399591 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414406 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414437 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414450 4842 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414461 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414470 4842 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414478 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414486 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414494 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.459594 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" path="/var/lib/kubelet/pods/174fcd53-40ab-4d19-a317-bc5cd117d2a4/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.460518 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" path="/var/lib/kubelet/pods/1f94c60e-a4fc-4b7d-96cd-367d46a731c4/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.461193 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" path="/var/lib/kubelet/pods/25609b1c-e1e9-4633-b3e3-93bd2f4396de/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.462847 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" path="/var/lib/kubelet/pods/2e4d672b-cb7a-406d-ab62-12745f300ef0/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.463760 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" path="/var/lib/kubelet/pods/34f55116-a518-4f21-8816-6f8232a6f68d/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.465279 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" path="/var/lib/kubelet/pods/4850512e-bbc8-468d-94ef-1d1be3b0b49c/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.465983 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" path="/var/lib/kubelet/pods/54aa018a-3e7e-4c95-9c1d-387543ed5af0/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.466796 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" path="/var/lib/kubelet/pods/6064786a-fa53-47a7-88ee-384cf70a86c6/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.468210 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" path="/var/lib/kubelet/pods/679e6e39-029a-452e-a375-bf0b937e3fbe/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.469211 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" path="/var/lib/kubelet/pods/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.470028 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" path="/var/lib/kubelet/pods/6c96a7e1-78c3-449d-9200-735db4ee7086/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.472979 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709c39fb-802f-4690-89f6-41a717e7244c" path="/var/lib/kubelet/pods/709c39fb-802f-4690-89f6-41a717e7244c/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.473742 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d5e0a1-8df4-4db1-aaf8-0d253163a522" path="/var/lib/kubelet/pods/79d5e0a1-8df4-4db1-aaf8-0d253163a522/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.474182 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e3e639-93f4-48d1-8a2f-89e48bcc5f1d" path="/var/lib/kubelet/pods/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.474627 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90821e80-1367-4cf6-8087-fb83507223ec" path="/var/lib/kubelet/pods/90821e80-1367-4cf6-8087-fb83507223ec/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.475583 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92090cd2-6d30-4aec-81a2-f7d41c40b52d" path="/var/lib/kubelet/pods/92090cd2-6d30-4aec-81a2-f7d41c40b52d/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.476007 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b912e45d-72e7-4250-9757-add1efcfb054" path="/var/lib/kubelet/pods/b912e45d-72e7-4250-9757-add1efcfb054/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.476714 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" path="/var/lib/kubelet/pods/c56025ce-3772-435d-bdba-a4d1ba9d6e2f/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.477274 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5059ce-9214-449d-a8d5-1b6ab7447e65" path="/var/lib/kubelet/pods/db5059ce-9214-449d-a8d5-1b6ab7447e65/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.478193 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e91519e6-bf55-4c08-8274-1d8a59f1ff52" path="/var/lib/kubelet/pods/e91519e6-bf55-4c08-8274-1d8a59f1ff52/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.478694 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" path="/var/lib/kubelet/pods/eb022115-b53a-4ed0-a2a0-b44644dc26a7/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.479636 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" path="/var/lib/kubelet/pods/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.689424 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.714770 4842 generic.go:334] "Generic (PLEG): container finished" podID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerID="3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.714833 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerDied","Data":"3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.717727 4842 generic.go:334] "Generic (PLEG): container finished" podID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.717921 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.718163 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerDied","Data":"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.718239 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerDied","Data":"63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.718266 4842 scope.go:117] "RemoveContainer" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.726573 4842 generic.go:334] "Generic (PLEG): container finished" podID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.726651 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.736851 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.756727 4842 generic.go:334] "Generic (PLEG): container finished" podID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" 
containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.756793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerDied","Data":"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.757139 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerDied","Data":"0a8707912ffa5b95a33e852a86d3ad76fb5ed5f7a33153be252e8d6c15cbbb8d"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.756819 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.763097 4842 scope.go:117] "RemoveContainer" containerID="6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.782438 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.790276 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.798739 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.799175 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.800707 4842 scope.go:117] "RemoveContainer" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.800793 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.800817 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.801481 4842 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.801571 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d\": container with ID starting with 384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d not found: ID does not exist" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.801611 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d"} err="failed to get container status \"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d\": rpc error: code = NotFound desc = could not find container \"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d\": container with ID starting with 384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.801635 4842 scope.go:117] "RemoveContainer" containerID="6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.804508 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b\": container with ID starting with 6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b not found: ID does not exist" containerID="6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.804538 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b"} err="failed to get container status \"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b\": rpc error: code = NotFound desc = could not find container \"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b\": container with ID starting with 6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.804558 4842 scope.go:117] "RemoveContainer" containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.804578 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.807939 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.807998 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.819809 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.819956 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820067 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820138 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820159 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820186 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820234 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820269 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" 
(UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820297 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820320 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820617 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.821052 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.821069 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.829329 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.829338 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.830241 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.832727 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4" (OuterVolumeSpecName: "kube-api-access-9ttm4") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "kube-api-access-9ttm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.833345 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info" (OuterVolumeSpecName: "pod-info") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.834731 4842 scope.go:117] "RemoveContainer" containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.838402 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765\": container with ID starting with 4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765 not found: ID does not exist" containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.838438 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765"} err="failed to get container status \"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765\": rpc error: code = NotFound desc = could not find container \"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765\": container with ID starting with 4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765 not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.850040 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data" (OuterVolumeSpecName: "config-data") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.864330 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf" (OuterVolumeSpecName: "server-conf") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.894710 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6684555597-gjtgz" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": dial tcp 10.217.0.168:9696: connect: connection refused" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.917878 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921766 4842 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921792 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921803 4842 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921814 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921827 4842 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921853 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921863 4842 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921876 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921886 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921896 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 
07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921905 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.943698 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.003931 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.024706 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: E0202 07:09:24.024805 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:24 crc kubenswrapper[4842]: E0202 07:09:24.025290 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:32.025274217 +0000 UTC m=+1397.402542129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.056518 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.067091 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125343 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125422 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125447 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125484 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125505 
4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125535 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125564 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125594 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125628 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125671 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125700 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.126199 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.126454 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.127776 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.129402 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.129899 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.130425 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info" (OuterVolumeSpecName: "pod-info") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.130697 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.144034 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data" (OuterVolumeSpecName: "config-data") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.146670 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl" (OuterVolumeSpecName: "kube-api-access-9n8dl") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "kube-api-access-9n8dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.161166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf" (OuterVolumeSpecName: "server-conf") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.201482 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227821 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227848 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227860 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227870 4842 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227879 4842 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227886 4842 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227894 4842 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227903 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227933 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227942 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227950 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.250890 4842 operation_generator.go:917] UnmountDevice 
succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.259333 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.170:8080/healthcheck\": dial tcp 10.217.0.170:8080: i/o timeout" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.259584 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.170:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.329460 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.782279 4842 generic.go:334] "Generic (PLEG): container finished" podID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" exitCode=0 Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.783405 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerDied","Data":"75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4"} Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.791775 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerStarted","Data":"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"} Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.795541 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerDied","Data":"f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c"} Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.795595 4842 scope.go:117] "RemoveContainer" containerID="3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.795760 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.853422 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.855281 4842 scope.go:117] "RemoveContainer" containerID="15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef" Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.859912 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.179688 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.244026 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.244152 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.244253 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.249012 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj" (OuterVolumeSpecName: "kube-api-access-zf5pj") pod "cbda1f81-b862-4ee7-84ce-590c353e4d5b" (UID: "cbda1f81-b862-4ee7-84ce-590c353e4d5b"). InnerVolumeSpecName "kube-api-access-zf5pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.264399 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbda1f81-b862-4ee7-84ce-590c353e4d5b" (UID: "cbda1f81-b862-4ee7-84ce-590c353e4d5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.266173 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data" (OuterVolumeSpecName: "config-data") pod "cbda1f81-b862-4ee7-84ce-590c353e4d5b" (UID: "cbda1f81-b862-4ee7-84ce-590c353e4d5b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.345861 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.345899 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.345909 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.441478 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" path="/var/lib/kubelet/pods/2b2ca532-dbbc-4148-8d2f-fc474685f0bd/volumes" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.442285 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" path="/var/lib/kubelet/pods/441d47f7-e5dd-456f-b6fa-10a642be6742/volumes" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.443266 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" path="/var/lib/kubelet/pods/7343dd67-a085-4da9-8d79-f25ea1e20ca6/volumes" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.812982 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerDied","Data":"85e914a150668613743c13aeff477024d4b0461bd9157d8138fdfcfd7144ee67"} Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.813040 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.813049 4842 scope.go:117] "RemoveContainer" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.816172 4842 generic.go:334] "Generic (PLEG): container finished" podID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37" exitCode=0 Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.816319 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"} Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.876266 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.883112 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.620610 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.620642 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.828615 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerStarted","Data":"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"} Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.851714 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zllm7" podStartSLOduration=6.355325003 podStartE2EDuration="8.85168316s" podCreationTimestamp="2026-02-02 07:09:18 +0000 UTC" firstStartedPulling="2026-02-02 07:09:23.736595709 +0000 UTC m=+1389.113863621" lastFinishedPulling="2026-02-02 07:09:26.232953856 +0000 UTC m=+1391.610221778" observedRunningTime="2026-02-02 07:09:26.846968159 +0000 UTC m=+1392.224236111" watchObservedRunningTime="2026-02-02 07:09:26.85168316 +0000 UTC m=+1392.228951112" Feb 02 07:09:27 crc kubenswrapper[4842]: I0202 07:09:27.442042 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" path="/var/lib/kubelet/pods/cbda1f81-b862-4ee7-84ce-590c353e4d5b/volumes" Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.796935 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" 
containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.797337 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.797652 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.797706 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.801303 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.803144 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.805328 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.805363 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:29 crc kubenswrapper[4842]: I0202 07:09:29.750264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:29 crc kubenswrapper[4842]: I0202 07:09:29.750321 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:30 crc kubenswrapper[4842]: I0202 
07:09:30.816373 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zllm7" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server" probeResult="failure" output=< Feb 02 07:09:30 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 07:09:30 crc kubenswrapper[4842]: > Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.798586 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799197 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799416 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799541 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799583 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.801347 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.802437 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.802476 4842 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.800404 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.800729 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.802560 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.802920 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.802969 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.804316 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.808336 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.808394 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:38 crc kubenswrapper[4842]: I0202 07:09:38.953993 4842 generic.go:334] "Generic (PLEG): container finished" podID="953bf671-ca79-4208-9bab-672dc079dd82" containerID="679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca" exitCode=0 Feb 02 07:09:38 crc kubenswrapper[4842]: I0202 07:09:38.954050 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerDied","Data":"679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca"} Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.154896 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274013 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274087 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274120 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274180 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274287 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274323 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274350 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.281373 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.281766 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647" (OuterVolumeSpecName: "kube-api-access-wj647") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "kube-api-access-wj647". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.314850 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.337798 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.339880 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.344344 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config" (OuterVolumeSpecName: "config") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.370921 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.375951 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.375984 4842 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.375994 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376004 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376015 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376024 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376032 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.813321 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.964846 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerDied","Data":"642e7ab1c818fa3e0857124b890ed7f6355271588ac21bdb99c64d978b7374b0"} Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.964896 4842 scope.go:117] "RemoveContainer" containerID="69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.964928 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.966733 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.996177 4842 scope.go:117] "RemoveContainer" containerID="679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca" Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.020609 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.030534 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.058104 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.978326 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zllm7" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server" containerID="cri-o://1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f" gracePeriod=2 Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.451839 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953bf671-ca79-4208-9bab-672dc079dd82" path="/var/lib/kubelet/pods/953bf671-ca79-4208-9bab-672dc079dd82/volumes" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.559546 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.741486 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.741848 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.741969 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.742488 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities" (OuterVolumeSpecName: "utilities") pod "02f0d774-dbe6-45d5-9ffa-64383c8be0d7" (UID: "02f0d774-dbe6-45d5-9ffa-64383c8be0d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.747833 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8" (OuterVolumeSpecName: "kube-api-access-f45s8") pod "02f0d774-dbe6-45d5-9ffa-64383c8be0d7" (UID: "02f0d774-dbe6-45d5-9ffa-64383c8be0d7"). InnerVolumeSpecName "kube-api-access-f45s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.843899 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.843947 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.924746 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02f0d774-dbe6-45d5-9ffa-64383c8be0d7" (UID: "02f0d774-dbe6-45d5-9ffa-64383c8be0d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.945788 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994070 4842 generic.go:334] "Generic (PLEG): container finished" podID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f" exitCode=0 Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994148 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"} Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994200 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e"} Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994269 4842 scope.go:117] "RemoveContainer" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f" Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994360 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.049730 4842 scope.go:117] "RemoveContainer" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.050433 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.058973 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.079120 4842 scope.go:117] "RemoveContainer" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.104478 4842 scope.go:117] "RemoveContainer" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f" Feb 02 07:09:42 crc kubenswrapper[4842]: E0202 07:09:42.105117 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f\": container with ID starting with 1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f not found: ID does not exist" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105154 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"} err="failed to get container status \"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f\": rpc error: code = NotFound desc = could not find container \"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f\": container with ID starting with 1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f not found: ID does not exist" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105184 4842 scope.go:117] "RemoveContainer" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37" Feb 02 07:09:42 crc kubenswrapper[4842]: E0202 07:09:42.105801 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37\": container with ID starting with f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37 not found: ID does not exist" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105877 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"} err="failed to get container status \"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37\": rpc error: code = NotFound desc = could not find container \"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37\": container with ID starting with f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37 not found: ID does not exist" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105913 4842 scope.go:117] "RemoveContainer" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869" Feb 02 07:09:42 crc kubenswrapper[4842]: E0202 07:09:42.106414 4842 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869\": container with ID starting with b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869 not found: ID does not exist" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.106451 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869"} err="failed to get container status \"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869\": rpc error: code = NotFound desc = could not find container \"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869\": container with ID starting with b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869 not found: ID does not exist" Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.145764 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.145826 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:09:43 crc kubenswrapper[4842]: I0202 07:09:43.449538 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" path="/var/lib/kubelet/pods/02f0d774-dbe6-45d5-9ffa-64383c8be0d7/volumes" Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.797528 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.798247 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.798711 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.798789 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.799581 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.801457 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.803453 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.803526 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.057356 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060" exitCode=137 Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.057454 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060"} Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.061321 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vctt8_ce6d1a00-c27b-418e-afa9-01c8c7802127/ovs-vswitchd/0.log" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.062829 4842 generic.go:334] "Generic (PLEG): container finished" podID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" exitCode=137 Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.062871 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e"} Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.213083 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332192 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332300 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332328 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332391 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332424 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332441 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332963 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock" (OuterVolumeSpecName: "lock") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.333137 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache" (OuterVolumeSpecName: "cache") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.337429 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.337992 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.339043 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87" (OuterVolumeSpecName: "kube-api-access-t9t87") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "kube-api-access-t9t87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.342574 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vctt8_ce6d1a00-c27b-418e-afa9-01c8c7802127/ovs-vswitchd/0.log" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.343600 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433113 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433155 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433240 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log" (OuterVolumeSpecName: "var-log") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433284 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433290 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run" (OuterVolumeSpecName: "var-run") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433330 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433407 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433481 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib" (OuterVolumeSpecName: "var-lib") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433509 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433736 4842 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433763 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433776 4842 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433788 4842 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433799 4842 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433811 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433823 4842 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433834 4842 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433845 4842 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.434576 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts" (OuterVolumeSpecName: "scripts") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.436119 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd" (OuterVolumeSpecName: "kube-api-access-6lfhd") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "kube-api-access-6lfhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.452527 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.535147 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.535190 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.535211 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.631584 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.636344 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.073600 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vctt8_ce6d1a00-c27b-418e-afa9-01c8c7802127/ovs-vswitchd/0.log" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.075156 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"20790a3e9ff5cd63d4fa516d28e246cafad534d4d8104c6a1f16eb5a3c586904"} Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.075204 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.075265 4842 scope.go:117] "RemoveContainer" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.091589 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"ab889a1e60a176a5157cbf2492af02320a93e4b8f19cc77b84445a221a0d1b90"} Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.091711 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.096652 4842 scope.go:117] "RemoveContainer" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.100149 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.116409 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.138244 4842 scope.go:117] "RemoveContainer" containerID="0e2b21c37cc6f772bef7c4e80d3e6f156ca0d9772f52dfdc03a69fbc57f8dd8b" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.148865 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.154405 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.163576 4842 scope.go:117] "RemoveContainer" containerID="a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.188651 4842 scope.go:117] "RemoveContainer" containerID="419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.204860 4842 scope.go:117] "RemoveContainer" containerID="c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.219498 4842 scope.go:117] "RemoveContainer" containerID="11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.234767 4842 scope.go:117] "RemoveContainer" containerID="3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.252936 4842 scope.go:117] "RemoveContainer" containerID="a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.276304 4842 scope.go:117] "RemoveContainer" containerID="5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.292967 4842 scope.go:117] "RemoveContainer" containerID="94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.319100 4842 scope.go:117] "RemoveContainer" containerID="98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.337502 4842 scope.go:117] "RemoveContainer" containerID="84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.357098 4842 scope.go:117] "RemoveContainer" containerID="78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.378722 4842 scope.go:117] "RemoveContainer" containerID="1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.398830 4842 scope.go:117] "RemoveContainer" containerID="81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.453130 4842 scope.go:117] "RemoveContainer" containerID="0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594" Feb 02 
07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.480816 4842 scope.go:117] "RemoveContainer" containerID="496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.039974 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.107261 4842 generic.go:334] "Generic (PLEG): container finished" podID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerID="dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6" exitCode=137 Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.107386 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerDied","Data":"dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6"} Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109031 4842 generic.go:334] "Generic (PLEG): container finished" podID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" exitCode=137 Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109077 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerDied","Data":"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081"} Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109097 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerDied","Data":"09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e"} Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109116 4842 scope.go:117] "RemoveContainer" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109235 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.135863 4842 scope.go:117] "RemoveContainer" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.150748 4842 scope.go:117] "RemoveContainer" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" Feb 02 07:09:49 crc kubenswrapper[4842]: E0202 07:09:49.151441 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081\": container with ID starting with b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081 not found: ID does not exist" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.151472 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081"} err="failed to get container status \"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081\": rpc error: code = NotFound desc = could not find container \"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081\": container with ID starting with b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081 not found: ID does not exist" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.151492 4842 scope.go:117] "RemoveContainer" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" Feb 02 07:09:49 crc kubenswrapper[4842]: E0202 07:09:49.151820 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398\": container with ID starting with c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398 not found: ID does not exist" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.151864 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398"} err="failed to get container status \"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398\": rpc error: code = NotFound desc = could not find container \"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398\": container with ID starting with c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398 not found: ID does not exist" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157417 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157536 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157586 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157617 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157640 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.158372 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs" (OuterVolumeSpecName: "logs") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.162400 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb" (OuterVolumeSpecName: "kube-api-access-kkhbb") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "kube-api-access-kkhbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.176248 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.187955 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.209102 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data" (OuterVolumeSpecName: "config-data") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.229610 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265206 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265248 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265258 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265268 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265276 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365737 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365820 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365886 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365981 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.366060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.366673 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs" (OuterVolumeSpecName: "logs") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.369124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt" (OuterVolumeSpecName: "kube-api-access-xdbkt") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "kube-api-access-xdbkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.370062 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.394437 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.415749 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data" (OuterVolumeSpecName: "config-data") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.461211 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" path="/var/lib/kubelet/pods/928a8c7e-d835-4795-8197-1861e4fd8f83/volumes" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.465067 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" path="/var/lib/kubelet/pods/ce6d1a00-c27b-418e-afa9-01c8c7802127/volumes" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.466192 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.466312 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467079 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467111 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467127 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467143 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467158 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.129016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerDied","Data":"d69c45eb45e674be84418f12982b88cbb7cb13f89d733e29e26157326878116c"} Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.129034 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.129585 4842 scope.go:117] "RemoveContainer" containerID="dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.159624 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.169045 4842 scope.go:117] "RemoveContainer" containerID="04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.169347 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:51 crc kubenswrapper[4842]: I0202 07:09:51.452248 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" path="/var/lib/kubelet/pods/748756c2-ee60-42ce-835e-bfaa7007d7ac/volumes" Feb 02 07:09:51 crc kubenswrapper[4842]: I0202 07:09:51.453771 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" path="/var/lib/kubelet/pods/f3d6691d-0283-4dd7-966d-ceba8bde7895/volumes" Feb 02 07:10:12 crc kubenswrapper[4842]: I0202 07:10:12.146475 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:10:12 crc kubenswrapper[4842]: I0202 07:10:12.147099 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.146827 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.149167 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.149432 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.150673 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.151020 4842 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" gracePeriod=600 Feb 02 07:10:42 crc kubenswrapper[4842]: E0202 07:10:42.283497 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.731497 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" exitCode=0 Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.731587 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"} Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.732189 4842 scope.go:117] "RemoveContainer" containerID="edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.733559 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:10:42 crc kubenswrapper[4842]: E0202 07:10:42.735448 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:10:57 crc kubenswrapper[4842]: I0202 07:10:57.433028 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:10:57 crc kubenswrapper[4842]: E0202 07:10:57.434157 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:11:08 crc kubenswrapper[4842]: I0202 07:11:08.433910 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:11:08 crc kubenswrapper[4842]: E0202 07:11:08.435028 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:11:18 crc 
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.532197 4842 scope.go:117] "RemoveContainer" containerID="185ab6e958e5fc2a5da9e833e3789438b8d16f440f7c53e0467e8ff307a5f7c8"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.579858 4842 scope.go:117] "RemoveContainer" containerID="f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.641141 4842 scope.go:117] "RemoveContainer" containerID="1f6dfdf20fb08a168081a064432d989dfc5b7013b8511778f8a6195c000accc0"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.667786 4842 scope.go:117] "RemoveContainer" containerID="326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.697793 4842 scope.go:117] "RemoveContainer" containerID="5a4746c338d6ea60edc25a0f516095639bc028a5f96d859500d9f30d568afd7f"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.731914 4842 scope.go:117] "RemoveContainer" containerID="fd930d739c77e2c60500ea7cab9f16a6ba8a914130efb858b41ff112a5549c6c"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.757006 4842 scope.go:117] "RemoveContainer" containerID="d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.783612 4842 scope.go:117] "RemoveContainer" containerID="2b38ab8a50c4bfdef3036052e4dbdb50598c007951f872fa5af56a866e47db58"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.809892 4842 scope.go:117] "RemoveContainer" containerID="d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.839491 4842 scope.go:117] "RemoveContainer" containerID="17bb3eec7905f7b5df5e9c3137f1a5db8fc820e99f038ef4113064b8ca0bb24d"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.910509 4842 scope.go:117] "RemoveContainer" containerID="baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.934565 4842 scope.go:117] "RemoveContainer" containerID="95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.962037 4842 scope.go:117] "RemoveContainer" containerID="a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090"
Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.987506 4842 scope.go:117] "RemoveContainer" containerID="be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8"
Feb 02 07:11:19 crc kubenswrapper[4842]: I0202 07:11:19.023210 4842 scope.go:117] "RemoveContainer" containerID="1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f"
Feb 02 07:11:19 crc kubenswrapper[4842]: I0202 07:11:19.053726 4842 scope.go:117] "RemoveContainer" containerID="8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1"
Feb 02 07:11:19 crc kubenswrapper[4842]: I0202 07:11:19.082027 4842 scope.go:117] "RemoveContainer" containerID="af9aab2a24cfc4f124984122e483edf359b136da9788f63d0af01da2b636aa44"
Feb 02 07:11:23 crc kubenswrapper[4842]: I0202 07:11:23.434042 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"
Feb 02 07:11:23 crc kubenswrapper[4842]: E0202 07:11:23.434917 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:11:34 crc kubenswrapper[4842]: I0202 07:11:34.433825 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"
Feb 02 07:11:34 crc kubenswrapper[4842]: E0202 07:11:34.435368 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:11:45 crc kubenswrapper[4842]: I0202 07:11:45.440459 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"
Feb 02 07:11:45 crc kubenswrapper[4842]: E0202 07:11:45.441204 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:11:56 crc kubenswrapper[4842]: I0202 07:11:56.433497 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"
Feb 02 07:11:56 crc kubenswrapper[4842]: E0202 07:11:56.434432 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:12:09 crc kubenswrapper[4842]: I0202 07:12:09.434593 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"
Feb 02 07:12:09 crc kubenswrapper[4842]: E0202 07:12:09.435779 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.497932 4842 scope.go:117] "RemoveContainer" containerID="39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18"
Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.560588 4842 scope.go:117] "RemoveContainer" containerID="e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e"
Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.609352 4842 scope.go:117] "RemoveContainer" containerID="9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f"
containerID="9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.653135 4842 scope.go:117] "RemoveContainer" containerID="7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.713045 4842 scope.go:117] "RemoveContainer" containerID="d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.751481 4842 scope.go:117] "RemoveContainer" containerID="d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.777482 4842 scope.go:117] "RemoveContainer" containerID="c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.811299 4842 scope.go:117] "RemoveContainer" containerID="5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.842833 4842 scope.go:117] "RemoveContainer" containerID="6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.869548 4842 scope.go:117] "RemoveContainer" containerID="83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.894570 4842 scope.go:117] "RemoveContainer" containerID="c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8" Feb 02 07:12:23 crc kubenswrapper[4842]: I0202 07:12:23.434605 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:12:23 crc kubenswrapper[4842]: E0202 07:12:23.435404 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.927712 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"] Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928394 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928417 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928443 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928469 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928497 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928512 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log" Feb 02 07:12:24 crc 
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928559 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928582 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928596 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928764 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928780 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928798 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928814 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928847 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928858 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928873 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928888 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928917 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928933 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928955 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="mysql-bootstrap"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928969 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="mysql-bootstrap"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928981 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928995 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached"
containerName="memcached" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929027 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929044 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929065 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-content" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929081 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-content" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929105 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929121 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929146 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929162 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929192 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929208 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929271 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929289 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929319 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929335 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929354 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929370 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929402 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929418 4842 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929451 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929467 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929482 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929497 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929516 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929532 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929556 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929571 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929588 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929604 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929635 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929653 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929668 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929683 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929702 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="setup-container" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929718 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="setup-container" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929745 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:12:24 crc 
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929786 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929803 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929820 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929836 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929862 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929878 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929898 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929913 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929944 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929959 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929990 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930006 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930031 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930048 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930066 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930081 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930098 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor"
podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930114 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930133 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930149 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930170 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930187 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930249 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-utilities" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930266 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-utilities" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930287 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930303 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930326 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930342 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930360 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930375 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930395 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930411 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930429 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930444 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930459 4842 
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930473 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930494 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930510 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930534 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server-init"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930550 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server-init"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930579 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930594 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930623 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" containerName="keystone-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930637 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" containerName="keystone-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930654 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930669 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930690 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930705 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930727 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930742 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930764 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="setup-container"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930778 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="setup-container"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930798 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930813 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930839 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930854 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930876 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930892 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930917 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930933 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930954 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930970 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930991 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931006 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931025 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931041 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931059 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931076 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931103 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931118 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931135 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931151 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931164 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931176 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931470 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931494 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931518 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931530 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931550 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931565 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931583 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931599 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931616 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931631 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931653 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931670 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931686 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931708 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron"
removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931721 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931734 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931747 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931765 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931786 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931802 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931815 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931869 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931892 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931906 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931919 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931936 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931950 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931972 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931991 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932010 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932028 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" 
containerName="keystone-api" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932042 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932060 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932080 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932093 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932110 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932126 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932142 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932161 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932174 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932190 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932204 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932245 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932261 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932277 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932291 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932304 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932323 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 
07:12:24.932345 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932362 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932379 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932397 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932416 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932430 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932445 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932464 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932480 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932494 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932510 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932522 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932986 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.934259 4842 util.go:30] "No sandbox for pod can be found. 
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.965117 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"]
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.126080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.126255 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.126293 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.227300 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.227401 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.227463 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.228022 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.228033 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.245144 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.264417 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.748197 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"]
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.913817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerStarted","Data":"24d91f3012e33754aacb4102942da6f61dfa4b5e76f13f807231a7a0da746b65"}
Feb 02 07:12:26 crc kubenswrapper[4842]: I0202 07:12:26.924889 4842 generic.go:334] "Generic (PLEG): container finished" podID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" exitCode=0
Feb 02 07:12:26 crc kubenswrapper[4842]: I0202 07:12:26.924992 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd"}
Feb 02 07:12:27 crc kubenswrapper[4842]: I0202 07:12:27.939825 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerStarted","Data":"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b"}
Feb 02 07:12:28 crc kubenswrapper[4842]: I0202 07:12:28.953400 4842 generic.go:334] "Generic (PLEG): container finished" podID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" exitCode=0
Feb 02 07:12:28 crc kubenswrapper[4842]: I0202 07:12:28.953475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b"}
Feb 02 07:12:29 crc kubenswrapper[4842]: I0202 07:12:29.968081 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerStarted","Data":"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173"}
Feb 02 07:12:29 crc kubenswrapper[4842]: I0202 07:12:29.996331 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4s2s4" podStartSLOduration=3.590215585 podStartE2EDuration="5.996314873s" podCreationTimestamp="2026-02-02 07:12:24 +0000 UTC" firstStartedPulling="2026-02-02 07:12:26.927333543 +0000 UTC m=+1572.304601455" lastFinishedPulling="2026-02-02 07:12:29.333432781 +0000 UTC m=+1574.710700743" observedRunningTime="2026-02-02 07:12:29.994493288 +0000 UTC m=+1575.371761240" watchObservedRunningTime="2026-02-02 07:12:29.996314873 +0000 UTC m=+1575.373582795"
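The pod_startup_latency_tracker entry above carries its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window taken from the monotonic m=+ readings (the startup SLI excludes image pulling). A short check, assuming exactly that relationship:

```python
# Sketch: re-derive the two durations logged by pod_startup_latency_tracker above.
# The m=+... suffixes are the kubelet's monotonic-clock readings in seconds.
created_wall = 24.0            # podCreationTimestamp, 07:12:24 (seconds past 07:12)
watch_wall   = 29.996314873    # watchObservedRunningTime, 07:12:29.996314873
pull_start   = 1572.304601455  # firstStartedPulling, m=+ value
pull_end     = 1574.710700743  # lastFinishedPulling,  m=+ value

e2e = watch_wall - created_wall        # 5.996314873 -> podStartE2EDuration
slo = e2e - (pull_end - pull_start)    # 3.590215585 -> podStartSLOduration
print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")
```

The same relationship holds for the community-operators-pqgtv entry further down: 4.371431339s minus a 2.434987064s pull window gives the logged 1.936444275.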
pod="openshift-marketplace/redhat-marketplace-4s2s4" Feb 02 07:12:35 crc kubenswrapper[4842]: I0202 07:12:35.266819 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4s2s4" Feb 02 07:12:35 crc kubenswrapper[4842]: I0202 07:12:35.329578 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4s2s4" Feb 02 07:12:36 crc kubenswrapper[4842]: I0202 07:12:36.108010 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4s2s4" Feb 02 07:12:36 crc kubenswrapper[4842]: I0202 07:12:36.169154 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"] Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.042898 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4s2s4" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" containerID="cri-o://db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" gracePeriod=2 Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.434432 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:12:38 crc kubenswrapper[4842]: E0202 07:12:38.435119 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.567837 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2s4" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.666919 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"99f8d884-14b5-451d-9fdc-fc33e7615919\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.667019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"99f8d884-14b5-451d-9fdc-fc33e7615919\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.667070 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"99f8d884-14b5-451d-9fdc-fc33e7615919\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.668802 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities" (OuterVolumeSpecName: "utilities") pod "99f8d884-14b5-451d-9fdc-fc33e7615919" (UID: "99f8d884-14b5-451d-9fdc-fc33e7615919"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.674131 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4" (OuterVolumeSpecName: "kube-api-access-9bcr4") pod "99f8d884-14b5-451d-9fdc-fc33e7615919" (UID: "99f8d884-14b5-451d-9fdc-fc33e7615919"). InnerVolumeSpecName "kube-api-access-9bcr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.701732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99f8d884-14b5-451d-9fdc-fc33e7615919" (UID: "99f8d884-14b5-451d-9fdc-fc33e7615919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.768782 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.768819 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") on node \"crc\" DevicePath \"\"" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.768850 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058322 4842 generic.go:334] "Generic (PLEG): container finished" podID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" exitCode=0 Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058371 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173"} Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058445 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2s4" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058473 4842 scope.go:117] "RemoveContainer" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"24d91f3012e33754aacb4102942da6f61dfa4b5e76f13f807231a7a0da746b65"} Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.100479 4842 scope.go:117] "RemoveContainer" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.121303 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"] Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.127802 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"] Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.130296 4842 scope.go:117] "RemoveContainer" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.177579 4842 scope.go:117] "RemoveContainer" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" Feb 02 07:12:39 crc kubenswrapper[4842]: E0202 07:12:39.178013 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173\": container with ID starting with db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173 not found: ID does not exist" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178052 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173"} err="failed to get container status \"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173\": rpc error: code = NotFound desc = could not find container \"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173\": container with ID starting with db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173 not found: ID does not exist" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178080 4842 scope.go:117] "RemoveContainer" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" Feb 02 07:12:39 crc kubenswrapper[4842]: E0202 07:12:39.178638 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b\": container with ID starting with e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b not found: ID does not exist" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178664 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b"} err="failed to get container status \"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b\": rpc error: code = NotFound desc = could not find 
container \"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b\": container with ID starting with e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b not found: ID does not exist" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178677 4842 scope.go:117] "RemoveContainer" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" Feb 02 07:12:39 crc kubenswrapper[4842]: E0202 07:12:39.179018 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd\": container with ID starting with 6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd not found: ID does not exist" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.179067 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd"} err="failed to get container status \"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd\": rpc error: code = NotFound desc = could not find container \"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd\": container with ID starting with 6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd not found: ID does not exist" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.449545 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" path="/var/lib/kubelet/pods/99f8d884-14b5-451d-9fdc-fc33e7615919/volumes" Feb 02 07:12:49 crc kubenswrapper[4842]: I0202 07:12:49.434321 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:12:49 crc kubenswrapper[4842]: E0202 07:12:49.435411 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:00 crc kubenswrapper[4842]: I0202 07:13:00.433838 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:00 crc kubenswrapper[4842]: E0202 07:13:00.434962 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.276326 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:01 crc kubenswrapper[4842]: E0202 07:13:01.277161 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277198 4842 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" Feb 02 07:13:01 crc kubenswrapper[4842]: E0202 07:13:01.277264 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-utilities" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277275 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-utilities" Feb 02 07:13:01 crc kubenswrapper[4842]: E0202 07:13:01.277293 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-content" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-content" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277740 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.279930 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.290962 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.456334 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.456772 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.456813 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.557887 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.557937 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.557976 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.558454 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.558811 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.576806 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.614096 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:02 crc kubenswrapper[4842]: I0202 07:13:02.079923 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:02 crc kubenswrapper[4842]: W0202 07:13:02.082492 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice/crio-f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15 WatchSource:0}: Error finding container f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15: Status 404 returned error can't find the container with id f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15 Feb 02 07:13:02 crc kubenswrapper[4842]: I0202 07:13:02.296840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e"} Feb 02 07:13:02 crc kubenswrapper[4842]: I0202 07:13:02.296901 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15"} Feb 02 07:13:03 crc kubenswrapper[4842]: I0202 07:13:03.311949 4842 generic.go:334] "Generic (PLEG): container finished" podID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" exitCode=0 Feb 02 07:13:03 crc kubenswrapper[4842]: I0202 07:13:03.312006 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" 
event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e"} Feb 02 07:13:03 crc kubenswrapper[4842]: I0202 07:13:03.314262 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb"} Feb 02 07:13:04 crc kubenswrapper[4842]: I0202 07:13:04.327953 4842 generic.go:334] "Generic (PLEG): container finished" podID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" exitCode=0 Feb 02 07:13:04 crc kubenswrapper[4842]: I0202 07:13:04.328024 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb"} Feb 02 07:13:05 crc kubenswrapper[4842]: I0202 07:13:05.341275 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f"} Feb 02 07:13:05 crc kubenswrapper[4842]: I0202 07:13:05.371462 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pqgtv" podStartSLOduration=1.936444275 podStartE2EDuration="4.371431339s" podCreationTimestamp="2026-02-02 07:13:01 +0000 UTC" firstStartedPulling="2026-02-02 07:13:02.299364693 +0000 UTC m=+1607.676632645" lastFinishedPulling="2026-02-02 07:13:04.734351787 +0000 UTC m=+1610.111619709" observedRunningTime="2026-02-02 07:13:05.367524673 +0000 UTC m=+1610.744792655" watchObservedRunningTime="2026-02-02 07:13:05.371431339 +0000 UTC m=+1610.748699291" Feb 02 07:13:11 crc kubenswrapper[4842]: I0202 07:13:11.614786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:11 crc kubenswrapper[4842]: I0202 07:13:11.615641 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:11 crc kubenswrapper[4842]: I0202 07:13:11.690508 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:12 crc kubenswrapper[4842]: I0202 07:13:12.434283 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:12 crc kubenswrapper[4842]: E0202 07:13:12.434727 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:12 crc kubenswrapper[4842]: I0202 07:13:12.488256 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:12 crc kubenswrapper[4842]: I0202 07:13:12.565343 4842 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:14 crc kubenswrapper[4842]: I0202 07:13:14.432363 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pqgtv" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" containerID="cri-o://f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" gracePeriod=2 Feb 02 07:13:14 crc kubenswrapper[4842]: I0202 07:13:14.969069 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.082546 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"a8eb678e-c4b4-4c94-ad98-b3327276614e\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.082635 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"a8eb678e-c4b4-4c94-ad98-b3327276614e\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.082798 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"a8eb678e-c4b4-4c94-ad98-b3327276614e\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.083866 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities" (OuterVolumeSpecName: "utilities") pod "a8eb678e-c4b4-4c94-ad98-b3327276614e" (UID: "a8eb678e-c4b4-4c94-ad98-b3327276614e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.091091 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv" (OuterVolumeSpecName: "kube-api-access-ss4hv") pod "a8eb678e-c4b4-4c94-ad98-b3327276614e" (UID: "a8eb678e-c4b4-4c94-ad98-b3327276614e"). InnerVolumeSpecName "kube-api-access-ss4hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.159554 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8eb678e-c4b4-4c94-ad98-b3327276614e" (UID: "a8eb678e-c4b4-4c94-ad98-b3327276614e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.185025 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.185241 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.185353 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") on node \"crc\" DevicePath \"\"" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.445095 4842 generic.go:334] "Generic (PLEG): container finished" podID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" exitCode=0 Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.445318 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.453714 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f"} Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.454322 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15"} Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.454345 4842 scope.go:117] "RemoveContainer" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.497784 4842 scope.go:117] "RemoveContainer" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.505715 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.514674 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.534795 4842 scope.go:117] "RemoveContainer" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.564707 4842 scope.go:117] "RemoveContainer" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" Feb 02 07:13:15 crc kubenswrapper[4842]: E0202 07:13:15.565173 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f\": container with ID starting with f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f not found: ID does not exist" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.565313 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f"} err="failed to get container status \"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f\": rpc error: code = NotFound desc = could not find container \"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f\": container with ID starting with f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f not found: ID does not exist" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.565354 4842 scope.go:117] "RemoveContainer" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" Feb 02 07:13:15 crc kubenswrapper[4842]: E0202 07:13:15.566124 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb\": container with ID starting with d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb not found: ID does not exist" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.566194 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb"} err="failed to get container status \"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb\": rpc error: code = NotFound desc = could not find container \"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb\": container with ID starting with d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb not found: ID does not exist" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.566281 4842 scope.go:117] "RemoveContainer" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" Feb 02 07:13:15 crc kubenswrapper[4842]: E0202 07:13:15.567181 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e\": container with ID starting with 25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e not found: ID does not exist" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.567256 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e"} err="failed to get container status \"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e\": rpc error: code = NotFound desc = could not find container \"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e\": container with ID starting with 25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e not found: ID does not exist" Feb 02 07:13:16 crc kubenswrapper[4842]: E0202 07:13:16.337514 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:17 crc kubenswrapper[4842]: I0202 07:13:17.471453 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" 
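The cadvisor_stats_provider entry above is the first of a series: the stats provider keeps polling the deleted a8eb678e pod's cgroup slice and misses the memory cache each time, until the cached entry ages out. In this log the repeats land roughly ten seconds apart, which a quick pass over a saved journal confirms:

```python
# Sketch: measure the cadence of the recurring cadvisor "Partial failure" entries.
import re
from datetime import datetime

TS = re.compile(
    r'(\d{2}:\d{2}:\d{2}\.\d{6}).*cadvisor_stats_provider\.go.*poda8eb678e')

stamps = []
with open("kubelet.log") as fh:   # hypothetical dump of the journal
    for line in fh:
        m = TS.search(line)
        if m:
            stamps.append(datetime.strptime(m.group(1), "%H:%M:%S.%f"))

# Gaps come out near ten seconds: a fixed polling period tripping over the
# same stale cgroup slice on every pass.
for a, b in zip(stamps, stamps[1:]):
    print((b - a).total_seconds())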
path="/var/lib/kubelet/pods/a8eb678e-c4b4-4c94-ad98-b3327276614e/volumes" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.073554 4842 scope.go:117] "RemoveContainer" containerID="f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.135498 4842 scope.go:117] "RemoveContainer" containerID="7b7d5e5edb2af232c2055e5da49c69d329f4113726a849604a2b594aefa2f3af" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.161120 4842 scope.go:117] "RemoveContainer" containerID="b2f7cb4727d9784f10ff6a0c8a30a31bb44be887023eca0a860978903f19daa6" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.197190 4842 scope.go:117] "RemoveContainer" containerID="36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.226413 4842 scope.go:117] "RemoveContainer" containerID="23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.249693 4842 scope.go:117] "RemoveContainer" containerID="e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.274197 4842 scope.go:117] "RemoveContainer" containerID="022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.303279 4842 scope.go:117] "RemoveContainer" containerID="c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.321999 4842 scope.go:117] "RemoveContainer" containerID="3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.341901 4842 scope.go:117] "RemoveContainer" containerID="2c7088cf1821b77c6f7eefcfe1152002a124d024b112d220292c3bfdaf924d4c" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.366148 4842 scope.go:117] "RemoveContainer" containerID="adafd15daec92386baa24cf42bc0363f97b26ac9307e8e8272e537e2c7e8b2cf" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.424836 4842 scope.go:117] "RemoveContainer" containerID="72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.449606 4842 scope.go:117] "RemoveContainer" containerID="50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.511316 4842 scope.go:117] "RemoveContainer" containerID="ca50f3bd514767840a56ccfe9f58d3e7f3e73682b97d7191a9419836cd607b01" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.539348 4842 scope.go:117] "RemoveContainer" containerID="baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb" Feb 02 07:13:25 crc kubenswrapper[4842]: I0202 07:13:25.440898 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:25 crc kubenswrapper[4842]: E0202 07:13:25.441722 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:26 crc kubenswrapper[4842]: E0202 07:13:26.521774 4842 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:36 crc kubenswrapper[4842]: E0202 07:13:36.707451 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:38 crc kubenswrapper[4842]: I0202 07:13:38.433719 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:38 crc kubenswrapper[4842]: E0202 07:13:38.434517 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:46 crc kubenswrapper[4842]: E0202 07:13:46.901110 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:49 crc kubenswrapper[4842]: I0202 07:13:49.434102 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:49 crc kubenswrapper[4842]: E0202 07:13:49.436069 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:57 crc kubenswrapper[4842]: E0202 07:13:57.126553 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:14:04 crc kubenswrapper[4842]: I0202 07:14:04.433904 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:04 crc kubenswrapper[4842]: E0202 07:14:04.435983 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:07 crc kubenswrapper[4842]: E0202 07:14:07.353494 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable 
to find data in memory cache]" Feb 02 07:14:15 crc kubenswrapper[4842]: I0202 07:14:15.437805 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:15 crc kubenswrapper[4842]: E0202 07:14:15.438592 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.780980 4842 scope.go:117] "RemoveContainer" containerID="a176e8b4ea564bc302309fcba58a47b8e68f174edeb83a184476a852cc3c272e" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.812609 4842 scope.go:117] "RemoveContainer" containerID="55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.890060 4842 scope.go:117] "RemoveContainer" containerID="5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.906385 4842 scope.go:117] "RemoveContainer" containerID="2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398" Feb 02 07:14:28 crc kubenswrapper[4842]: I0202 07:14:28.434280 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:28 crc kubenswrapper[4842]: E0202 07:14:28.435532 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:40 crc kubenswrapper[4842]: I0202 07:14:40.434522 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:40 crc kubenswrapper[4842]: E0202 07:14:40.435837 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:53 crc kubenswrapper[4842]: I0202 07:14:53.852114 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:53 crc kubenswrapper[4842]: E0202 07:14:53.852961 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.165026 4842 kubelet.go:2421] "SyncLoop ADD" source="api" 
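machine-config-daemon above is pinned at the kubelet's maximum restart back-off: the delay doubles per failed restart until it caps at 5m0s, and in the meantime each pod-worker sync re-logs the skip every ten seconds or so while the timer runs. A sketch of that schedule, assuming the default 10s initial delay and 5m cap:

```python
# Sketch: the kubelet's CrashLoopBackOff delay doubles from 10s up to a 5m
# ceiling, which is why the entries above are stuck repeating "back-off 5m0s".
def backoff_schedule(initial=10, cap=300):
    """Yield successive back-off delays in seconds (10s, 20s, ... capped at 5m)."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= 2

gen = backoff_schedule()
print([next(gen) for _ in range(8)])   # [10, 20, 40, 80, 160, 300, 300, 300]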
pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"] Feb 02 07:15:00 crc kubenswrapper[4842]: E0202 07:15:00.166284 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-content" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166314 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-content" Feb 02 07:15:00 crc kubenswrapper[4842]: E0202 07:15:00.166333 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166349 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" Feb 02 07:15:00 crc kubenswrapper[4842]: E0202 07:15:00.166382 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-utilities" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166401 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-utilities" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166710 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.167761 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.170460 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.171598 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.187609 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"] Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.224508 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.224558 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.224630 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.325661 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.325712 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.325771 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.326850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.334655 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.356449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.529853 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.782821 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"] Feb 02 07:15:01 crc kubenswrapper[4842]: I0202 07:15:01.468834 4842 generic.go:334] "Generic (PLEG): container finished" podID="94334935-cf80-444c-b508-8c45e9780eee" containerID="3ec04990d6c97adea2fe95dabf427fb8df7522b562c84dbbcac33e51d0d54b26" exitCode=0 Feb 02 07:15:01 crc kubenswrapper[4842]: I0202 07:15:01.469069 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" event={"ID":"94334935-cf80-444c-b508-8c45e9780eee","Type":"ContainerDied","Data":"3ec04990d6c97adea2fe95dabf427fb8df7522b562c84dbbcac33e51d0d54b26"} Feb 02 07:15:01 crc kubenswrapper[4842]: I0202 07:15:01.469253 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" event={"ID":"94334935-cf80-444c-b508-8c45e9780eee","Type":"ContainerStarted","Data":"ffd9b2a09b1899cc128dde5a3fdc164f53315d8e11ae540afa00b51d8d3daceb"} Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.790419 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.871456 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"94334935-cf80-444c-b508-8c45e9780eee\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.871612 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"94334935-cf80-444c-b508-8c45e9780eee\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.871663 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"94334935-cf80-444c-b508-8c45e9780eee\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.874254 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume" (OuterVolumeSpecName: "config-volume") pod "94334935-cf80-444c-b508-8c45e9780eee" (UID: "94334935-cf80-444c-b508-8c45e9780eee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.877356 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "94334935-cf80-444c-b508-8c45e9780eee" (UID: "94334935-cf80-444c-b508-8c45e9780eee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.878388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd" (OuterVolumeSpecName: "kube-api-access-rhnnd") pod "94334935-cf80-444c-b508-8c45e9780eee" (UID: "94334935-cf80-444c-b508-8c45e9780eee"). InnerVolumeSpecName "kube-api-access-rhnnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.973503 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.973564 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") on node \"crc\" DevicePath \"\"" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.973586 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:15:03 crc kubenswrapper[4842]: I0202 07:15:03.505502 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" event={"ID":"94334935-cf80-444c-b508-8c45e9780eee","Type":"ContainerDied","Data":"ffd9b2a09b1899cc128dde5a3fdc164f53315d8e11ae540afa00b51d8d3daceb"} Feb 02 07:15:03 crc kubenswrapper[4842]: I0202 07:15:03.505572 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd9b2a09b1899cc128dde5a3fdc164f53315d8e11ae540afa00b51d8d3daceb" Feb 02 07:15:03 crc kubenswrapper[4842]: I0202 07:15:03.505920 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:04 crc kubenswrapper[4842]: I0202 07:15:04.434261 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:04 crc kubenswrapper[4842]: E0202 07:15:04.436010 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:16 crc kubenswrapper[4842]: I0202 07:15:16.434532 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:16 crc kubenswrapper[4842]: E0202 07:15:16.436064 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:20 crc kubenswrapper[4842]: I0202 07:15:20.995933 4842 scope.go:117] "RemoveContainer" containerID="1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc" Feb 02 07:15:21 crc kubenswrapper[4842]: I0202 07:15:21.025892 4842 scope.go:117] "RemoveContainer" containerID="bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00" Feb 02 07:15:21 crc kubenswrapper[4842]: I0202 07:15:21.058897 4842 scope.go:117] "RemoveContainer" containerID="999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c" Feb 02 07:15:28 crc kubenswrapper[4842]: I0202 07:15:28.434321 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:28 crc kubenswrapper[4842]: E0202 07:15:28.435660 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:41 crc kubenswrapper[4842]: I0202 07:15:41.434155 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:41 crc kubenswrapper[4842]: E0202 07:15:41.435104 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:53 crc kubenswrapper[4842]: I0202 07:15:53.434721 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:53 crc kubenswrapper[4842]: I0202 
07:15:53.972357 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1"} Feb 02 07:18:12 crc kubenswrapper[4842]: I0202 07:18:12.146517 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:18:12 crc kubenswrapper[4842]: I0202 07:18:12.147189 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:18:42 crc kubenswrapper[4842]: I0202 07:18:42.146692 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:18:42 crc kubenswrapper[4842]: I0202 07:18:42.147734 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.146705 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.148302 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.148457 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.149129 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.149320 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1" 
gracePeriod=600 Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.859809 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1" exitCode=0 Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.859905 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1"} Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.860404 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"} Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.860422 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.449764 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:19:53 crc kubenswrapper[4842]: E0202 07:19:53.451056 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94334935-cf80-444c-b508-8c45e9780eee" containerName="collect-profiles" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.451077 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="94334935-cf80-444c-b508-8c45e9780eee" containerName="collect-profiles" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.451337 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="94334935-cf80-444c-b508-8c45e9780eee" containerName="collect-profiles" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.452905 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.486141 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.565121 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.565181 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.565563 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666411 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666460 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666509 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.667017 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.693236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.778449 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.023675 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.245325 4842 generic.go:334] "Generic (PLEG): container finished" podID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerID="54d29c0b963abf2e6cbe9930fdfb039211d0f6d3757608dff7e813a74402f5e9" exitCode=0 Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.245436 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"54d29c0b963abf2e6cbe9930fdfb039211d0f6d3757608dff7e813a74402f5e9"} Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.245724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerStarted","Data":"65e76528169fb677cd540320202422b2280b76074014ea15d85e95ebce1f9e4b"} Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.247279 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:19:56 crc kubenswrapper[4842]: I0202 07:19:56.268773 4842 generic.go:334] "Generic (PLEG): container finished" podID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerID="15961ca3966c5e19bf382f4ff38a45f3b4f496271c3a403b37983001d2953ade" exitCode=0 Feb 02 07:19:56 crc kubenswrapper[4842]: I0202 07:19:56.269179 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"15961ca3966c5e19bf382f4ff38a45f3b4f496271c3a403b37983001d2953ade"} Feb 02 07:19:57 crc kubenswrapper[4842]: I0202 07:19:57.287687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerStarted","Data":"26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26"} Feb 02 07:19:57 crc kubenswrapper[4842]: I0202 07:19:57.316617 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6fs69" podStartSLOduration=1.779264986 podStartE2EDuration="4.316593607s" podCreationTimestamp="2026-02-02 07:19:53 +0000 UTC" firstStartedPulling="2026-02-02 07:19:54.246942791 +0000 UTC m=+2019.624210713" lastFinishedPulling="2026-02-02 07:19:56.784271382 +0000 UTC m=+2022.161539334" observedRunningTime="2026-02-02 07:19:57.315572032 +0000 UTC m=+2022.692839954" watchObservedRunningTime="2026-02-02 07:19:57.316593607 +0000 UTC m=+2022.693861559" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.091400 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.094901 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.114896 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.197976 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.198061 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.198200 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299243 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299545 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299624 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.300043 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.322779 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.416665 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.779017 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.779358 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.828474 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.855999 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.341185 4842 generic.go:334] "Generic (PLEG): container finished" podID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" exitCode=0 Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.341303 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d"} Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.341683 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerStarted","Data":"e260a9f7d75fb075cee2c831326054895c9434e215ba53e7ea6103746c73ba81"} Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.416494 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:05 crc kubenswrapper[4842]: I0202 07:20:05.358484 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerStarted","Data":"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d"} Feb 02 07:20:05 crc kubenswrapper[4842]: I0202 07:20:05.480993 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:20:06 crc kubenswrapper[4842]: I0202 07:20:06.369311 4842 generic.go:334] "Generic (PLEG): container finished" podID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" exitCode=0 Feb 02 07:20:06 crc kubenswrapper[4842]: I0202 07:20:06.369573 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6fs69" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" containerID="cri-o://26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26" gracePeriod=2 Feb 02 07:20:06 crc kubenswrapper[4842]: I0202 07:20:06.370834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.379037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerStarted","Data":"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.383934 4842 generic.go:334] "Generic (PLEG): container finished" podID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerID="26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26" exitCode=0 Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.384009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.384040 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"65e76528169fb677cd540320202422b2280b76074014ea15d85e95ebce1f9e4b"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.384081 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e76528169fb677cd540320202422b2280b76074014ea15d85e95ebce1f9e4b" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.405184 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-72zmj" podStartSLOduration=1.899990096 podStartE2EDuration="4.405165764s" podCreationTimestamp="2026-02-02 07:20:03 +0000 UTC" firstStartedPulling="2026-02-02 07:20:04.343553756 +0000 UTC m=+2029.720821708" lastFinishedPulling="2026-02-02 07:20:06.848729444 +0000 UTC m=+2032.225997376" observedRunningTime="2026-02-02 07:20:07.398478628 +0000 UTC m=+2032.775746550" watchObservedRunningTime="2026-02-02 07:20:07.405165764 +0000 UTC m=+2032.782433686" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.406929 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.568889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"ca00d8b2-3728-456f-bf49-285fb31385ef\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.569062 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"ca00d8b2-3728-456f-bf49-285fb31385ef\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.569144 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"ca00d8b2-3728-456f-bf49-285fb31385ef\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.570284 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities" (OuterVolumeSpecName: "utilities") pod "ca00d8b2-3728-456f-bf49-285fb31385ef" (UID: "ca00d8b2-3728-456f-bf49-285fb31385ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.575542 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85" (OuterVolumeSpecName: "kube-api-access-n6v85") pod "ca00d8b2-3728-456f-bf49-285fb31385ef" (UID: "ca00d8b2-3728-456f-bf49-285fb31385ef"). InnerVolumeSpecName "kube-api-access-n6v85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.671706 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.672070 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.756296 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca00d8b2-3728-456f-bf49-285fb31385ef" (UID: "ca00d8b2-3728-456f-bf49-285fb31385ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.773551 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:08 crc kubenswrapper[4842]: I0202 07:20:08.393989 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:08 crc kubenswrapper[4842]: I0202 07:20:08.448615 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:20:08 crc kubenswrapper[4842]: I0202 07:20:08.459008 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:20:09 crc kubenswrapper[4842]: I0202 07:20:09.449364 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" path="/var/lib/kubelet/pods/ca00d8b2-3728-456f-bf49-285fb31385ef/volumes" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.417424 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.417848 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.498357 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.574669 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.749127 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:15 crc kubenswrapper[4842]: I0202 07:20:15.458625 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-72zmj" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" containerID="cri-o://05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" gracePeriod=2 Feb 02 07:20:15 crc kubenswrapper[4842]: I0202 07:20:15.967832 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.109371 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"453006f5-8304-47d9-b9d8-a4cc69692dcc\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.109568 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"453006f5-8304-47d9-b9d8-a4cc69692dcc\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.109664 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"453006f5-8304-47d9-b9d8-a4cc69692dcc\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.111610 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities" (OuterVolumeSpecName: "utilities") pod "453006f5-8304-47d9-b9d8-a4cc69692dcc" (UID: "453006f5-8304-47d9-b9d8-a4cc69692dcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.123612 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb" (OuterVolumeSpecName: "kube-api-access-fw8gb") pod "453006f5-8304-47d9-b9d8-a4cc69692dcc" (UID: "453006f5-8304-47d9-b9d8-a4cc69692dcc"). InnerVolumeSpecName "kube-api-access-fw8gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.184478 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "453006f5-8304-47d9-b9d8-a4cc69692dcc" (UID: "453006f5-8304-47d9-b9d8-a4cc69692dcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.211148 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.211179 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.211208 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472263 4842 generic.go:334] "Generic (PLEG): container finished" podID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" exitCode=0 Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472347 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c"} Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472359 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472411 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"e260a9f7d75fb075cee2c831326054895c9434e215ba53e7ea6103746c73ba81"} Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472451 4842 scope.go:117] "RemoveContainer" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.508306 4842 scope.go:117] "RemoveContainer" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.509053 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.513935 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.534881 4842 scope.go:117] "RemoveContainer" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.565836 4842 scope.go:117] "RemoveContainer" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" Feb 02 07:20:16 crc kubenswrapper[4842]: E0202 07:20:16.566539 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c\": container with ID starting with 05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c not found: ID does not exist" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.566619 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c"} err="failed to get container status \"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c\": rpc error: code = NotFound desc = could not find container \"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c\": container with ID starting with 05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c not found: ID does not exist" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.566663 4842 scope.go:117] "RemoveContainer" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" Feb 02 07:20:16 crc kubenswrapper[4842]: E0202 07:20:16.567281 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d\": container with ID starting with a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d not found: ID does not exist" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.567335 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d"} err="failed to get container status \"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d\": rpc error: code = NotFound desc = could not find container \"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d\": container with ID starting with a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d not found: ID does not exist" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.567367 4842 scope.go:117] "RemoveContainer" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" Feb 02 07:20:16 crc kubenswrapper[4842]: E0202 07:20:16.568115 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d\": container with ID starting with d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d not found: ID does not exist" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.568176 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d"} err="failed to get container status \"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d\": rpc error: code = NotFound desc = could not find container \"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d\": container with ID starting with d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d not found: ID does not exist" Feb 02 07:20:17 crc kubenswrapper[4842]: I0202 07:20:17.452721 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" path="/var/lib/kubelet/pods/453006f5-8304-47d9-b9d8-a4cc69692dcc/volumes" Feb 02 07:21:12 crc kubenswrapper[4842]: I0202 07:21:12.146723 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:21:12 crc kubenswrapper[4842]: I0202 07:21:12.147427 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:21:42 crc kubenswrapper[4842]: I0202 07:21:42.146688 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:21:42 crc kubenswrapper[4842]: I0202 07:21:42.147554 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.145959 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.146870 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.146956 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.147904 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.148033 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" gracePeriod=600 Feb 02 07:22:12 crc kubenswrapper[4842]: E0202 07:22:12.302588 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.592455 4842 
generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" exitCode=0 Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.592528 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"} Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.592598 4842 scope.go:117] "RemoveContainer" containerID="ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.593110 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:12 crc kubenswrapper[4842]: E0202 07:22:12.593487 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:25 crc kubenswrapper[4842]: I0202 07:22:25.441690 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:25 crc kubenswrapper[4842]: E0202 07:22:25.442997 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:37 crc kubenswrapper[4842]: I0202 07:22:37.435053 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:37 crc kubenswrapper[4842]: E0202 07:22:37.436143 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:52 crc kubenswrapper[4842]: I0202 07:22:52.434073 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:52 crc kubenswrapper[4842]: E0202 07:22:52.434751 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.722119 4842 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723291 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723313 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723336 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723350 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723366 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723379 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723409 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723417 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723429 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723439 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723457 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723465 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723658 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723682 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.731171 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.733910 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"]
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.757100 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.757279 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.757361 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.857880 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.857972 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.858062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.858783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.858802 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.879251 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:23:00 crc kubenswrapper[4842]: I0202 07:23:00.057961 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:23:00 crc kubenswrapper[4842]: I0202 07:23:00.529431 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"]
Feb 02 07:23:01 crc kubenswrapper[4842]: I0202 07:23:01.031688 4842 generic.go:334] "Generic (PLEG): container finished" podID="5dd671b4-cc04-4a87-a275-dea779856d29" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" exitCode=0
Feb 02 07:23:01 crc kubenswrapper[4842]: I0202 07:23:01.031862 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315"}
Feb 02 07:23:01 crc kubenswrapper[4842]: I0202 07:23:01.032332 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerStarted","Data":"e5a81788552af8e52157e4072852e261209293db616808e28f3f1089dc73b9a0"}
Feb 02 07:23:03 crc kubenswrapper[4842]: I0202 07:23:03.059697 4842 generic.go:334] "Generic (PLEG): container finished" podID="5dd671b4-cc04-4a87-a275-dea779856d29" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" exitCode=0
Feb 02 07:23:03 crc kubenswrapper[4842]: I0202 07:23:03.059791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3"}
Feb 02 07:23:04 crc kubenswrapper[4842]: I0202 07:23:04.072366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerStarted","Data":"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb"}
Feb 02 07:23:04 crc kubenswrapper[4842]: I0202 07:23:04.106084 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5vntr" podStartSLOduration=2.576027165 podStartE2EDuration="5.106048639s" podCreationTimestamp="2026-02-02 07:22:59 +0000 UTC" firstStartedPulling="2026-02-02 07:23:01.035071522 +0000 UTC m=+2206.412339464" lastFinishedPulling="2026-02-02 07:23:03.565093026 +0000 UTC m=+2208.942360938" observedRunningTime="2026-02-02 07:23:04.100668536 +0000 UTC m=+2209.477936478" watchObservedRunningTime="2026-02-02 07:23:04.106048639 +0000 UTC m=+2209.483316581"
Feb 02 07:23:07 crc kubenswrapper[4842]: I0202 07:23:07.433548 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"
Feb 02 07:23:07 crc kubenswrapper[4842]: E0202 07:23:07.434147 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.058553 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.058654 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.129883 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.216584 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.379727 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"]
Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.148303 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5vntr" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server" containerID="cri-o://8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" gracePeriod=2
Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.606002 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr"
Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.776639 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"5dd671b4-cc04-4a87-a275-dea779856d29\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") "
Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.776711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"5dd671b4-cc04-4a87-a275-dea779856d29\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") "
Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.776756 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"5dd671b4-cc04-4a87-a275-dea779856d29\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") "
Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.778697 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities" (OuterVolumeSpecName: "utilities") pod "5dd671b4-cc04-4a87-a275-dea779856d29" (UID: "5dd671b4-cc04-4a87-a275-dea779856d29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.784472 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv" (OuterVolumeSpecName: "kube-api-access-wnwjv") pod "5dd671b4-cc04-4a87-a275-dea779856d29" (UID: "5dd671b4-cc04-4a87-a275-dea779856d29"). InnerVolumeSpecName "kube-api-access-wnwjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.823936 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dd671b4-cc04-4a87-a275-dea779856d29" (UID: "5dd671b4-cc04-4a87-a275-dea779856d29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.878346 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") on node \"crc\" DevicePath \"\"" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.878384 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.878397 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162187 4842 generic.go:334] "Generic (PLEG): container finished" podID="5dd671b4-cc04-4a87-a275-dea779856d29" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" exitCode=0 Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162291 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb"} Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162342 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162358 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"e5a81788552af8e52157e4072852e261209293db616808e28f3f1089dc73b9a0"} Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162393 4842 scope.go:117] "RemoveContainer" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.192179 4842 scope.go:117] "RemoveContainer" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.226742 4842 scope.go:117] "RemoveContainer" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.234441 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.246628 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.276471 4842 scope.go:117] "RemoveContainer" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" Feb 02 07:23:13 crc kubenswrapper[4842]: E0202 07:23:13.280641 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb\": container with ID starting with 8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb not found: ID does not exist" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.280708 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb"} err="failed to get container status \"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb\": rpc error: code = NotFound desc = could not find container \"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb\": container with ID starting with 8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb not found: ID does not exist" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.280750 4842 scope.go:117] "RemoveContainer" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" Feb 02 07:23:13 crc kubenswrapper[4842]: E0202 07:23:13.281282 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3\": container with ID starting with ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3 not found: ID does not exist" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.281325 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3"} err="failed to get container status \"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3\": rpc error: code = NotFound desc = could not find 
container \"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3\": container with ID starting with ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3 not found: ID does not exist" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.281358 4842 scope.go:117] "RemoveContainer" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" Feb 02 07:23:13 crc kubenswrapper[4842]: E0202 07:23:13.281696 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315\": container with ID starting with 794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315 not found: ID does not exist" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.281821 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315"} err="failed to get container status \"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315\": rpc error: code = NotFound desc = could not find container \"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315\": container with ID starting with 794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315 not found: ID does not exist" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.450015 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" path="/var/lib/kubelet/pods/5dd671b4-cc04-4a87-a275-dea779856d29/volumes" Feb 02 07:23:19 crc kubenswrapper[4842]: I0202 07:23:19.433374 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:23:19 crc kubenswrapper[4842]: E0202 07:23:19.433869 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:23:32 crc kubenswrapper[4842]: I0202 07:23:32.433776 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:23:32 crc kubenswrapper[4842]: E0202 07:23:32.434950 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:23:45 crc kubenswrapper[4842]: I0202 07:23:45.439932 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:23:45 crc kubenswrapper[4842]: E0202 07:23:45.440945 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.904893 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s582b"]
Feb 02 07:23:57 crc kubenswrapper[4842]: E0202 07:23:57.906277 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server"
Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server"
Feb 02 07:23:57 crc kubenswrapper[4842]: E0202 07:23:57.906361 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-content"
Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906381 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-content"
Feb 02 07:23:57 crc kubenswrapper[4842]: E0202 07:23:57.906427 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-utilities"
Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906444 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-utilities"
Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906786 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server"
Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.909091 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.941911 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s582b"]
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.037983 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.038889 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.039044 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146072 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146191 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146237 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146676 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146719 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.165779 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.236498 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.716108 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s582b"]
Feb 02 07:23:59 crc kubenswrapper[4842]: I0202 07:23:59.674419 4842 generic.go:334] "Generic (PLEG): container finished" podID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb" exitCode=0
Feb 02 07:23:59 crc kubenswrapper[4842]: I0202 07:23:59.674486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"}
Feb 02 07:23:59 crc kubenswrapper[4842]: I0202 07:23:59.674526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerStarted","Data":"0718849fe0d8989a2e6ce298bf3ef3a019350d90cc19a1f763d7b226873cba7f"}
Feb 02 07:24:00 crc kubenswrapper[4842]: I0202 07:24:00.433033 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"
Feb 02 07:24:00 crc kubenswrapper[4842]: E0202 07:24:00.433565 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:24:00 crc kubenswrapper[4842]: I0202 07:24:00.685542 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerStarted","Data":"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"}
Feb 02 07:24:01 crc kubenswrapper[4842]: I0202 07:24:01.696268 4842 generic.go:334] "Generic (PLEG): container finished" podID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756" exitCode=0
Feb 02 07:24:01 crc kubenswrapper[4842]: I0202 07:24:01.696355 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"}
Feb 02 07:24:02 crc kubenswrapper[4842]: I0202 07:24:02.706487 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerStarted","Data":"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"}
Feb 02 07:24:02 crc kubenswrapper[4842]: I0202 07:24:02.749148 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s582b" podStartSLOduration=3.337602704 podStartE2EDuration="5.749129686s" podCreationTimestamp="2026-02-02 07:23:57 +0000 UTC" firstStartedPulling="2026-02-02 07:23:59.67821433 +0000 UTC m=+2265.055482282" lastFinishedPulling="2026-02-02 07:24:02.089741322 +0000 UTC m=+2267.467009264" observedRunningTime="2026-02-02 07:24:02.740296858 +0000 UTC m=+2268.117564790" watchObservedRunningTime="2026-02-02 07:24:02.749129686 +0000 UTC m=+2268.126397608"
Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.237121 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.237502 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.281134 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.844319 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.906890 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s582b"]
Feb 02 07:24:10 crc kubenswrapper[4842]: I0202 07:24:10.788192 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s582b" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server" containerID="cri-o://f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096" gracePeriod=2
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.755871 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813017 4842 generic.go:334] "Generic (PLEG): container finished" podID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096" exitCode=0
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813070 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"}
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813103 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"0718849fe0d8989a2e6ce298bf3ef3a019350d90cc19a1f763d7b226873cba7f"}
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813128 4842 scope.go:117] "RemoveContainer" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813328 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s582b"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855541 4842 scope.go:117] "RemoveContainer" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855699 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") "
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855786 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") "
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855902 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") "
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.856706 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities" (OuterVolumeSpecName: "utilities") pod "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" (UID: "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.870057 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft" (OuterVolumeSpecName: "kube-api-access-2dmft") pod "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" (UID: "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1"). InnerVolumeSpecName "kube-api-access-2dmft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.890327 4842 scope.go:117] "RemoveContainer" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916020 4842 scope.go:117] "RemoveContainer" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"
Feb 02 07:24:11 crc kubenswrapper[4842]: E0202 07:24:11.916359 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096\": container with ID starting with f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096 not found: ID does not exist" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916408 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"} err="failed to get container status \"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096\": rpc error: code = NotFound desc = could not find container \"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096\": container with ID starting with f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096 not found: ID does not exist"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916432 4842 scope.go:117] "RemoveContainer" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"
Feb 02 07:24:11 crc kubenswrapper[4842]: E0202 07:24:11.916916 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756\": container with ID starting with 21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756 not found: ID does not exist" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916946 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"} err="failed to get container status \"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756\": rpc error: code = NotFound desc = could not find container \"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756\": container with ID starting with 21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756 not found: ID does not exist"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.917021 4842 scope.go:117] "RemoveContainer" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"
Feb 02 07:24:11 crc kubenswrapper[4842]: E0202 07:24:11.917271 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": container with ID starting with af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb not found: ID does not exist" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"
Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.917294 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"} err="failed to get container status \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": rpc error: code = NotFound desc = could not find container \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": container with ID starting with af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb not found: ID does not exist"
containerID={"Type":"cri-o","ID":"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"} err="failed to get container status \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": rpc error: code = NotFound desc = could not find container \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": container with ID starting with af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb not found: ID does not exist" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.924146 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" (UID: "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.957352 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.957388 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") on node \"crc\" DevicePath \"\"" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.957402 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:24:12 crc kubenswrapper[4842]: I0202 07:24:12.172747 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:24:12 crc kubenswrapper[4842]: I0202 07:24:12.180962 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:24:13 crc kubenswrapper[4842]: I0202 07:24:13.447755 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" path="/var/lib/kubelet/pods/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1/volumes" Feb 02 07:24:14 crc kubenswrapper[4842]: I0202 07:24:14.434327 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:14 crc kubenswrapper[4842]: E0202 07:24:14.434986 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:24:28 crc kubenswrapper[4842]: I0202 07:24:28.434586 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:28 crc kubenswrapper[4842]: E0202 07:24:28.435814 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:24:40 crc kubenswrapper[4842]: I0202 07:24:40.434262 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:40 crc kubenswrapper[4842]: E0202 07:24:40.435133 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:24:53 crc kubenswrapper[4842]: I0202 07:24:53.575612 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:53 crc kubenswrapper[4842]: E0202 07:24:53.576959 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:05 crc kubenswrapper[4842]: I0202 07:25:05.440814 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:05 crc kubenswrapper[4842]: E0202 07:25:05.441843 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:17 crc kubenswrapper[4842]: I0202 07:25:17.433043 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:17 crc kubenswrapper[4842]: E0202 07:25:17.433903 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:31 crc kubenswrapper[4842]: I0202 07:25:31.434371 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:31 crc kubenswrapper[4842]: E0202 07:25:31.435717 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:44 crc kubenswrapper[4842]: I0202 07:25:44.434475 4842 
scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:44 crc kubenswrapper[4842]: E0202 07:25:44.435739 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:58 crc kubenswrapper[4842]: I0202 07:25:58.433350 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:58 crc kubenswrapper[4842]: E0202 07:25:58.434313 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:26:11 crc kubenswrapper[4842]: I0202 07:26:11.433352 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:26:11 crc kubenswrapper[4842]: E0202 07:26:11.434382 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:26:21 crc kubenswrapper[4842]: I0202 07:26:21.389637 4842 scope.go:117] "RemoveContainer" containerID="26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26" Feb 02 07:26:21 crc kubenswrapper[4842]: I0202 07:26:21.422890 4842 scope.go:117] "RemoveContainer" containerID="15961ca3966c5e19bf382f4ff38a45f3b4f496271c3a403b37983001d2953ade" Feb 02 07:26:21 crc kubenswrapper[4842]: I0202 07:26:21.453933 4842 scope.go:117] "RemoveContainer" containerID="54d29c0b963abf2e6cbe9930fdfb039211d0f6d3757608dff7e813a74402f5e9" Feb 02 07:26:23 crc kubenswrapper[4842]: I0202 07:26:23.433748 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:26:23 crc kubenswrapper[4842]: E0202 07:26:23.434545 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:26:37 crc kubenswrapper[4842]: I0202 07:26:37.439029 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:26:37 crc kubenswrapper[4842]: E0202 07:26:37.439969 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
Feb 02 07:26:50 crc kubenswrapper[4842]: I0202 07:26:50.434331 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"
Feb 02 07:26:50 crc kubenswrapper[4842]: E0202 07:26:50.435190 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:27:03 crc kubenswrapper[4842]: I0202 07:27:03.434098 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"
Feb 02 07:27:03 crc kubenswrapper[4842]: E0202 07:27:03.434788 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:27:14 crc kubenswrapper[4842]: I0202 07:27:14.434271 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"
Feb 02 07:27:15 crc kubenswrapper[4842]: I0202 07:27:15.474999 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc"}
Feb 02 07:29:42 crc kubenswrapper[4842]: I0202 07:29:42.146496 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:29:42 crc kubenswrapper[4842]: I0202 07:29:42.147113 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.155371 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"]
Feb 02 07:30:00 crc kubenswrapper[4842]: E0202 07:30:00.156295 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156311 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server"
Feb 02 07:30:00 crc kubenswrapper[4842]: E0202 07:30:00.156334 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-utilities"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156341 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-utilities"
Feb 02 07:30:00 crc kubenswrapper[4842]: E0202 07:30:00.156352 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-content"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156361 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-content"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156541 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.157061 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.159573 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.160517 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.172548 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"]
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.349794 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.349865 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.349932 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.451739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.451795 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.451829 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.452774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.470369 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.471820 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.527667 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.014794 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"]
Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.910100 4842 generic.go:334] "Generic (PLEG): container finished" podID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerID="8dbf1ff40ae24c1cb278330205be0fe8707c50279bf4f5b00c195cfdd226a43f" exitCode=0
Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.910154 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" event={"ID":"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe","Type":"ContainerDied","Data":"8dbf1ff40ae24c1cb278330205be0fe8707c50279bf4f5b00c195cfdd226a43f"}
Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.910179 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" event={"ID":"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe","Type":"ContainerStarted","Data":"42234ac27dca0ee5645ba71bf9d5f3f8fe88e03e1320e7bf8885da12a745dbd4"}
Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.263265 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.395890 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.396378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.396543 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.397141 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" (UID: "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.403640 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" (UID: "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.404480 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l" (OuterVolumeSpecName: "kube-api-access-bxw7l") pod "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" (UID: "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe"). InnerVolumeSpecName "kube-api-access-bxw7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.497787 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.497821 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.497831 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.933570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" event={"ID":"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe","Type":"ContainerDied","Data":"42234ac27dca0ee5645ba71bf9d5f3f8fe88e03e1320e7bf8885da12a745dbd4"} Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.933680 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42234ac27dca0ee5645ba71bf9d5f3f8fe88e03e1320e7bf8885da12a745dbd4" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.933742 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:04 crc kubenswrapper[4842]: I0202 07:30:04.369526 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 07:30:04 crc kubenswrapper[4842]: I0202 07:30:04.379795 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 07:30:05 crc kubenswrapper[4842]: I0202 07:30:05.446925 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b43b464-5623-46bb-8097-65b505d08960" path="/var/lib/kubelet/pods/5b43b464-5623-46bb-8097-65b505d08960/volumes" Feb 02 07:30:12 crc kubenswrapper[4842]: I0202 07:30:12.146544 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:30:12 crc kubenswrapper[4842]: I0202 07:30:12.147210 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:30:21 crc kubenswrapper[4842]: I0202 07:30:21.569771 4842 scope.go:117] "RemoveContainer" containerID="ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.146420 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.147290 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.147362 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.148310 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.148404 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc" gracePeriod=600 Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.295111 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc" exitCode=0 Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.295267 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc"} Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.295333 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:30:43 crc kubenswrapper[4842]: I0202 07:30:43.308500 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"} Feb 02 07:32:42 crc kubenswrapper[4842]: I0202 07:32:42.146189 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:32:42 crc kubenswrapper[4842]: I0202 07:32:42.146947 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:12 crc kubenswrapper[4842]: I0202 07:33:12.146025 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:33:12 crc kubenswrapper[4842]: I0202 07:33:12.146865 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.108650 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:38 crc kubenswrapper[4842]: E0202 07:33:38.110137 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerName="collect-profiles" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.110281 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerName="collect-profiles" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.111469 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerName="collect-profiles" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.113396 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.145086 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.232373 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.232756 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.232886 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.334611 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.334931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod 
\"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.335013 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.336582 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.337120 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.377957 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.453531 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.898594 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:38 crc kubenswrapper[4842]: W0202 07:33:38.905357 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode169b475_82e9_44a2_8cd1_9b1290cbc992.slice/crio-79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7 WatchSource:0}: Error finding container 79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7: Status 404 returned error can't find the container with id 79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7 Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.264154 4842 generic.go:334] "Generic (PLEG): container finished" podID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" exitCode=0 Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.264286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141"} Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.264362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerStarted","Data":"79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7"} Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.266881 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:33:41 crc kubenswrapper[4842]: I0202 07:33:41.284908 4842 generic.go:334] "Generic (PLEG): container finished" podID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" exitCode=0 Feb 02 07:33:41 crc kubenswrapper[4842]: I0202 07:33:41.285087 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327"} Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.146577 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.146961 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.147011 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.147674 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.147743 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" gracePeriod=600 Feb 02 07:33:42 crc kubenswrapper[4842]: E0202 07:33:42.270726 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.297877 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" exitCode=0 Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.297927 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"} Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.298020 4842 scope.go:117] "RemoveContainer" containerID="a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.298638 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:33:42 crc kubenswrapper[4842]: E0202 07:33:42.298910 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.300540 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerStarted","Data":"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1"} Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.367327 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hdmsn" podStartSLOduration=1.745830914 podStartE2EDuration="4.36730983s" podCreationTimestamp="2026-02-02 07:33:38 +0000 UTC" firstStartedPulling="2026-02-02 07:33:39.266439234 +0000 UTC m=+2844.643707176" lastFinishedPulling="2026-02-02 07:33:41.88791814 +0000 UTC m=+2847.265186092" observedRunningTime="2026-02-02 07:33:42.36448272 +0000 UTC m=+2847.741750642" watchObservedRunningTime="2026-02-02 07:33:42.36730983 +0000 UTC m=+2847.744577742" Feb 02 
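
The "Observed pod startup duration" entry just above is internally consistent: the tracker reports the end-to-end startup time (podStartE2EDuration) and an SLO duration that excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Redoing the arithmetic with the logged timestamps (values copied verbatim; the last digits differ from the logged 1.745830914 only because the kubelet prints the SLO value as a rounded float):

    package main

    import (
    	"fmt"
    	"time"
    )

    // Numbers copied from the "Observed pod startup duration" entry above;
    // parse errors are ignored for brevity.
    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	first, _ := time.Parse(layout, "2026-02-02 07:33:39.266439234 +0000 UTC")
    	last, _ := time.Parse(layout, "2026-02-02 07:33:41.88791814 +0000 UTC")
    	e2e, _ := time.ParseDuration("4.36730983s")

    	pulling := last.Sub(first)
    	fmt.Println("image pulling:", pulling)    // 2.621478906s
    	fmt.Println("SLO duration:", e2e-pulling) // 1.745830924s; logged as 1.745830914
    }
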
07:33:48 crc kubenswrapper[4842]: I0202 07:33:48.453883 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:48 crc kubenswrapper[4842]: I0202 07:33:48.454786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:48 crc kubenswrapper[4842]: I0202 07:33:48.525471 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:49 crc kubenswrapper[4842]: I0202 07:33:49.448211 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:49 crc kubenswrapper[4842]: I0202 07:33:49.522944 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.374816 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hdmsn" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" containerID="cri-o://c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" gracePeriod=2 Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.783169 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.854493 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"e169b475-82e9-44a2-8cd1-9b1290cbc992\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.862322 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities" (OuterVolumeSpecName: "utilities") pod "e169b475-82e9-44a2-8cd1-9b1290cbc992" (UID: "e169b475-82e9-44a2-8cd1-9b1290cbc992"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.862565 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod \"e169b475-82e9-44a2-8cd1-9b1290cbc992\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.865494 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"e169b475-82e9-44a2-8cd1-9b1290cbc992\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.865971 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.871780 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56" (OuterVolumeSpecName: "kube-api-access-dmg56") pod "e169b475-82e9-44a2-8cd1-9b1290cbc992" (UID: "e169b475-82e9-44a2-8cd1-9b1290cbc992"). InnerVolumeSpecName "kube-api-access-dmg56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.890101 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e169b475-82e9-44a2-8cd1-9b1290cbc992" (UID: "e169b475-82e9-44a2-8cd1-9b1290cbc992"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.966951 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.966980 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384032 4842 generic.go:334] "Generic (PLEG): container finished" podID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" exitCode=0 Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384091 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1"} Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384106 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384125 4842 scope.go:117] "RemoveContainer" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7"} Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.405760 4842 scope.go:117] "RemoveContainer" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.422808 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.429379 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.440150 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.440398 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.443873 4842 scope.go:117] "RemoveContainer" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.469866 4842 scope.go:117] "RemoveContainer" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.470470 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1\": container with ID starting with c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1 not found: ID does not exist" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.470539 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1"} err="failed to get container status \"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1\": rpc error: code = NotFound desc = could not find container \"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1\": container with ID starting with c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1 not found: ID does not exist" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.470573 4842 scope.go:117] "RemoveContainer" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.471166 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327\": container with ID starting with d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327 not found: ID does not exist" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.471205 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327"} err="failed to get container status \"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327\": rpc error: code = NotFound desc = could not find container \"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327\": container with ID starting with d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327 not found: ID does not exist" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.471250 4842 scope.go:117] "RemoveContainer" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.471625 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141\": container with ID starting with f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141 not found: ID does not exist" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.471688 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141"} err="failed to get container status \"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141\": rpc error: code = NotFound desc = could not find container \"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141\": container with ID starting with f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141 not found: ID does not exist" Feb 02 07:33:53 crc kubenswrapper[4842]: I0202 07:33:53.448563 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" path="/var/lib/kubelet/pods/e169b475-82e9-44a2-8cd1-9b1290cbc992/volumes" Feb 02 07:34:03 crc kubenswrapper[4842]: I0202 07:34:03.433785 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:34:03 crc kubenswrapper[4842]: E0202 07:34:03.434783 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:34:14 crc kubenswrapper[4842]: I0202 07:34:14.434631 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:34:14 crc kubenswrapper[4842]: E0202 07:34:14.436201 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.892252 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:15 crc kubenswrapper[4842]: E0202 07:34:15.893272 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-utilities" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893306 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-utilities" Feb 02 07:34:15 crc kubenswrapper[4842]: E0202 07:34:15.893367 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-content" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893389 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-content" Feb 02 07:34:15 crc kubenswrapper[4842]: E0202 07:34:15.893414 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893429 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893733 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.896852 4842 util.go:30] "No sandbox for pod can be found. 
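
The repeating "back-off 5m0s restarting failed container" errors at 07:33:42, 07:34:03, 07:34:14 (and again at 07:34:27 below) are not new crashes: machine-config-daemon has failed often enough that its restart backoff has reached the cap, and every sync attempt inside the window is skipped with this message. Per documented kubelet behavior the restart delay starts at 10 seconds and doubles per crash up to the 5m0s cap seen here; the ladder, sketched:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const maxDelay = 5 * time.Minute // the "back-off 5m0s" cap in the log
    	delay := 10 * time.Second        // documented initial restart delay
    	for i := 1; delay < maxDelay; i++ {
    		fmt.Printf("crash %d: next restart in %v\n", i, delay)
    		delay *= 2
    	}
    	fmt.Println("further crashes: next restart in", maxDelay)
    }

This prints 10s, 20s, 40s, 1m20s, 2m40s, then the 5m cap for everything after.
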
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.960813 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.046823 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.046913 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.046959 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148275 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148442 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148506 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148831 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148959 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.173459 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.268536 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.526354 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.626425 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerStarted","Data":"1f32afced739696c72206844574f32ea8877ddc224d52507ad2399e87f80a1d6"} Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.638979 4842 generic.go:334] "Generic (PLEG): container finished" podID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c" exitCode=0 Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.639049 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c"} Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.693751 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.696402 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.713662 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.800156 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.800596 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.800676 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.902375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.902458 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.902528 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.904028 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.918747 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.964721 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:18 crc kubenswrapper[4842]: I0202 07:34:18.028661 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:18 crc kubenswrapper[4842]: I0202 07:34:18.548787 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:18 crc kubenswrapper[4842]: I0202 07:34:18.650575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerStarted","Data":"83159ccc32f7be030ca5abe567af2ef0943590860edaa29b62e4d57bd3a56973"} Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.079445 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.081152 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.100512 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.146268 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.146395 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.146457 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.247801 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.247901 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " 
pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.247974 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.248393 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.248531 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.281696 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.407454 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.672962 4842 generic.go:334] "Generic (PLEG): container finished" podID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85" exitCode=0 Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.673088 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85"} Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.678276 4842 generic.go:334] "Generic (PLEG): container finished" podID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b" exitCode=0 Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.678324 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b"} Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.905504 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:19 crc kubenswrapper[4842]: W0202 07:34:19.914425 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22349677_a0b4_43a2_9a43_61b9bbd55eed.slice/crio-1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf WatchSource:0}: Error finding container 
1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf: Status 404 returned error can't find the container with id 1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.689953 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerStarted","Data":"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.692525 4842 generic.go:334] "Generic (PLEG): container finished" podID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6" exitCode=0 Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.692579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.692613 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerStarted","Data":"1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.696332 4842 generic.go:334] "Generic (PLEG): container finished" podID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2" exitCode=0 Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.696356 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.720855 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2wsvb" podStartSLOduration=3.065599174 podStartE2EDuration="5.720836945s" podCreationTimestamp="2026-02-02 07:34:15 +0000 UTC" firstStartedPulling="2026-02-02 07:34:17.640838395 +0000 UTC m=+2883.018106337" lastFinishedPulling="2026-02-02 07:34:20.296076196 +0000 UTC m=+2885.673344108" observedRunningTime="2026-02-02 07:34:20.713303199 +0000 UTC m=+2886.090571121" watchObservedRunningTime="2026-02-02 07:34:20.720836945 +0000 UTC m=+2886.098104867" Feb 02 07:34:21 crc kubenswrapper[4842]: I0202 07:34:21.705584 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerStarted","Data":"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"} Feb 02 07:34:21 crc kubenswrapper[4842]: I0202 07:34:21.707609 4842 generic.go:334] "Generic (PLEG): container finished" podID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8" exitCode=0 Feb 02 07:34:21 crc kubenswrapper[4842]: I0202 07:34:21.708397 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8"} Feb 02 07:34:21 crc 
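
The paired "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" entries throughout this log come from the Pod Lifecycle Event Generator: a periodic relist of the runtime's containers is diffed against the previous state, and each change is emitted as an event (ContainerStarted, ContainerDied) for the sync loop to consume. A toy model of that flow, using truncated IDs from the community-operators-qp6vd entries above (types and names are illustrative, not the kubelet's actual API):

    package main

    import "fmt"

    // PodLifecycleEvent is a stand-in for the kubelet's internal event type.
    type PodLifecycleEvent struct {
    	ID   string // pod UID
    	Type string // "ContainerStarted" / "ContainerDied"
    	Data string // container ID
    }

    func main() {
    	events := make(chan PodLifecycleEvent, 2)
    	// relist detects: extract-content finished, registry-server started
    	events <- PodLifecycleEvent{"22349677-a0b4", "ContainerDied", "9ddd7e7d6ec6"}
    	events <- PodLifecycleEvent{"22349677-a0b4", "ContainerStarted", "76deb60b5abc"}
    	close(events)
    	for ev := range events {
    		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    	}
    }
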
kubenswrapper[4842]: I0202 07:34:21.748453 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tcqpr" podStartSLOduration=3.320714916 podStartE2EDuration="4.748430709s" podCreationTimestamp="2026-02-02 07:34:17 +0000 UTC" firstStartedPulling="2026-02-02 07:34:19.674858117 +0000 UTC m=+2885.052126029" lastFinishedPulling="2026-02-02 07:34:21.10257388 +0000 UTC m=+2886.479841822" observedRunningTime="2026-02-02 07:34:21.739206551 +0000 UTC m=+2887.116474463" watchObservedRunningTime="2026-02-02 07:34:21.748430709 +0000 UTC m=+2887.125698631"
Feb 02 07:34:22 crc kubenswrapper[4842]: I0202 07:34:22.716670 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerStarted","Data":"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"}
Feb 02 07:34:22 crc kubenswrapper[4842]: I0202 07:34:22.742718 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qp6vd" podStartSLOduration=2.131209228 podStartE2EDuration="3.742695507s" podCreationTimestamp="2026-02-02 07:34:19 +0000 UTC" firstStartedPulling="2026-02-02 07:34:20.693896679 +0000 UTC m=+2886.071164591" lastFinishedPulling="2026-02-02 07:34:22.305382958 +0000 UTC m=+2887.682650870" observedRunningTime="2026-02-02 07:34:22.740269037 +0000 UTC m=+2888.117536999" watchObservedRunningTime="2026-02-02 07:34:22.742695507 +0000 UTC m=+2888.119963419"
Feb 02 07:34:26 crc kubenswrapper[4842]: I0202 07:34:26.268800 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2wsvb"
Feb 02 07:34:26 crc kubenswrapper[4842]: I0202 07:34:26.271492 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2wsvb"
Feb 02 07:34:27 crc kubenswrapper[4842]: I0202 07:34:27.327125 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wsvb" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server" probeResult="failure" output=<
Feb 02 07:34:27 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s
Feb 02 07:34:27 crc kubenswrapper[4842]: >
Feb 02 07:34:27 crc kubenswrapper[4842]: I0202 07:34:27.434724 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:34:27 crc kubenswrapper[4842]: E0202 07:34:27.435199 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.030597 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tcqpr"
Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.030676 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tcqpr"
Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.104106 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tcqpr"
Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.836807 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tcqpr"
Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.912387 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"]
Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.408257 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qp6vd"
Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.408849 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qp6vd"
Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.459488 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qp6vd"
Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.811957 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qp6vd"
Feb 02 07:34:30 crc kubenswrapper[4842]: I0202 07:34:30.758393 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"]
Feb 02 07:34:30 crc kubenswrapper[4842]: I0202 07:34:30.779137 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tcqpr" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server" containerID="cri-o://12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b" gracePeriod=2
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.722404 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790789 4842 generic.go:334] "Generic (PLEG): container finished" podID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b" exitCode=0
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790844 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790884 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"}
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790962 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"83159ccc32f7be030ca5abe567af2ef0943590860edaa29b62e4d57bd3a56973"}
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790994 4842 scope.go:117] "RemoveContainer" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.793077 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qp6vd" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server" containerID="cri-o://76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c" gracePeriod=2
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.812270 4842 scope.go:117] "RemoveContainer" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.837003 4842 scope.go:117] "RemoveContainer" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.872588 4842 scope.go:117] "RemoveContainer" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"
Feb 02 07:34:31 crc kubenswrapper[4842]: E0202 07:34:31.873035 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b\": container with ID starting with 12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b not found: ID does not exist" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873074 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"} err="failed to get container status \"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b\": rpc error: code = NotFound desc = could not find container \"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b\": container with ID starting with 12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b not found: ID does not exist"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873100 4842 scope.go:117] "RemoveContainer" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2"
Feb 02 07:34:31 crc kubenswrapper[4842]: E0202 07:34:31.873438 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2\": container with ID starting with 08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2 not found: ID does not exist" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873463 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2"} err="failed to get container status \"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2\": rpc error: code = NotFound desc = could not find container \"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2\": container with ID starting with 08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2 not found: ID does not exist"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873476 4842 scope.go:117] "RemoveContainer" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85"
Feb 02 07:34:31 crc kubenswrapper[4842]: E0202 07:34:31.873680 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85\": container with ID starting with 040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85 not found: ID does not exist" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873695 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85"} err="failed to get container status \"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85\": rpc error: code = NotFound desc = could not find container \"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85\": container with ID starting with 040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85 not found: ID does not exist"
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.880901 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"9a5e892e-8cde-49ea-ad01-14593db40e0e\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") "
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.880996 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"9a5e892e-8cde-49ea-ad01-14593db40e0e\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") "
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.881575 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"9a5e892e-8cde-49ea-ad01-14593db40e0e\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") "
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.882570 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities" (OuterVolumeSpecName: "utilities") pod "9a5e892e-8cde-49ea-ad01-14593db40e0e" (UID: "9a5e892e-8cde-49ea-ad01-14593db40e0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.886828 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4" (OuterVolumeSpecName: "kube-api-access-kl9w4") pod "9a5e892e-8cde-49ea-ad01-14593db40e0e" (UID: "9a5e892e-8cde-49ea-ad01-14593db40e0e"). InnerVolumeSpecName "kube-api-access-kl9w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.941995 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a5e892e-8cde-49ea-ad01-14593db40e0e" (UID: "9a5e892e-8cde-49ea-ad01-14593db40e0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.983125 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.983155 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.983166 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.167940 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"]
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.173504 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"]
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.744926 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799694 4842 generic.go:334] "Generic (PLEG): container finished" podID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c" exitCode=0
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799757 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"}
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799776 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799781 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf"}
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799797 4842 scope.go:117] "RemoveContainer" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.817905 4842 scope.go:117] "RemoveContainer" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.837732 4842 scope.go:117] "RemoveContainer" containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.869269 4842 scope.go:117] "RemoveContainer" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"
Feb 02 07:34:32 crc kubenswrapper[4842]: E0202 07:34:32.869732 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c\": container with ID starting with 76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c not found: ID does not exist" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.869761 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"} err="failed to get container status \"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c\": rpc error: code = NotFound desc = could not find container \"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c\": container with ID starting with 76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c not found: ID does not exist"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.869785 4842 scope.go:117] "RemoveContainer" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8"
Feb 02 07:34:32 crc kubenswrapper[4842]: E0202 07:34:32.870182 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8\": container with ID starting with 9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8 not found: ID does not exist" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.870262 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8"} err="failed to get container status \"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8\": rpc error: code = NotFound desc = could not find container \"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8\": container with ID starting with 9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8 not found: ID does not exist"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.870310 4842 scope.go:117] "RemoveContainer" containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6"
Feb 02 07:34:32 crc kubenswrapper[4842]: E0202 07:34:32.870648 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6\": container with ID starting with 458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6 not found: ID does not exist" containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.870673 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6"} err="failed to get container status \"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6\": rpc error: code = NotFound desc = could not find container \"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6\": container with ID starting with 458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6 not found: ID does not exist"
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.896871 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"22349677-a0b4-43a2-9a43-61b9bbd55eed\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") "
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.897068 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"22349677-a0b4-43a2-9a43-61b9bbd55eed\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") "
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.897271 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"22349677-a0b4-43a2-9a43-61b9bbd55eed\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") "
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.898843 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities" (OuterVolumeSpecName: "utilities") pod "22349677-a0b4-43a2-9a43-61b9bbd55eed" (UID: "22349677-a0b4-43a2-9a43-61b9bbd55eed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.905724 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb" (OuterVolumeSpecName: "kube-api-access-cqltb") pod "22349677-a0b4-43a2-9a43-61b9bbd55eed" (UID: "22349677-a0b4-43a2-9a43-61b9bbd55eed"). InnerVolumeSpecName "kube-api-access-cqltb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.946440 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22349677-a0b4-43a2-9a43-61b9bbd55eed" (UID: "22349677-a0b4-43a2-9a43-61b9bbd55eed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:32.999969 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.000038 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.000058 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.157154 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"]
Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.168175 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"]
Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.459728 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" path="/var/lib/kubelet/pods/22349677-a0b4-43a2-9a43-61b9bbd55eed/volumes"
Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.461526 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" path="/var/lib/kubelet/pods/9a5e892e-8cde-49ea-ad01-14593db40e0e/volumes"
Feb 02 07:34:36 crc kubenswrapper[4842]: I0202 07:34:36.345450 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2wsvb"
Feb 02 07:34:36 crc kubenswrapper[4842]: I0202 07:34:36.414393 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2wsvb"
Feb 02 07:34:37 crc kubenswrapper[4842]: I0202 07:34:37.162185 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"]
Feb 02 07:34:37 crc kubenswrapper[4842]: I0202 07:34:37.849359 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2wsvb" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server" containerID="cri-o://78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008" gracePeriod=2
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.324982 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.490588 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") "
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.490734 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") "
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.490812 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") "
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.492883 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities" (OuterVolumeSpecName: "utilities") pod "ab4626e6-200f-4cd6-937d-4eb7cf9911ab" (UID: "ab4626e6-200f-4cd6-937d-4eb7cf9911ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.496697 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg" (OuterVolumeSpecName: "kube-api-access-mnztg") pod "ab4626e6-200f-4cd6-937d-4eb7cf9911ab" (UID: "ab4626e6-200f-4cd6-937d-4eb7cf9911ab"). InnerVolumeSpecName "kube-api-access-mnztg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.592752 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.592780 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.669436 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab4626e6-200f-4cd6-937d-4eb7cf9911ab" (UID: "ab4626e6-200f-4cd6-937d-4eb7cf9911ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.693924 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862069 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862132 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"}
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862203 4842 scope.go:117] "RemoveContainer" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862023 4842 generic.go:334] "Generic (PLEG): container finished" podID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008" exitCode=0
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862535 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"1f32afced739696c72206844574f32ea8877ddc224d52507ad2399e87f80a1d6"}
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.901704 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"]
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.905081 4842 scope.go:117] "RemoveContainer" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.907803 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"]
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.922838 4842 scope.go:117] "RemoveContainer" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.966453 4842 scope.go:117] "RemoveContainer" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"
Feb 02 07:34:38 crc kubenswrapper[4842]: E0202 07:34:38.966997 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008\": container with ID starting with 78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008 not found: ID does not exist" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967046 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"} err="failed to get container status \"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008\": rpc error: code = NotFound desc = could not find container \"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008\": container with ID starting with 78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008 not found: ID does not exist"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967070 4842 scope.go:117] "RemoveContainer" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b"
Feb 02 07:34:38 crc kubenswrapper[4842]: E0202 07:34:38.967465 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b\": container with ID starting with 4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b not found: ID does not exist" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967489 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b"} err="failed to get container status \"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b\": rpc error: code = NotFound desc = could not find container \"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b\": container with ID starting with 4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b not found: ID does not exist"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967504 4842 scope.go:117] "RemoveContainer" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c"
Feb 02 07:34:38 crc kubenswrapper[4842]: E0202 07:34:38.967892 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c\": container with ID starting with f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c not found: ID does not exist" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c"
Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967964 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c"} err="failed to get container status \"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c\": rpc error: code = NotFound desc = could not find container \"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c\": container with ID starting with f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c not found: ID does not exist"
Feb 02 07:34:39 crc kubenswrapper[4842]: I0202 07:34:39.467685 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" path="/var/lib/kubelet/pods/ab4626e6-200f-4cd6-937d-4eb7cf9911ab/volumes"
Feb 02 07:34:40 crc kubenswrapper[4842]: I0202 07:34:40.432982 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:34:40 crc kubenswrapper[4842]: E0202 07:34:40.433609 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:34:53 crc kubenswrapper[4842]: I0202 07:34:53.434440 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:34:53 crc kubenswrapper[4842]: E0202 07:34:53.435578 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:35:07 crc kubenswrapper[4842]: I0202 07:35:07.433562 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:35:07 crc kubenswrapper[4842]: E0202 07:35:07.434945 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:35:19 crc kubenswrapper[4842]: I0202 07:35:19.434151 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:35:19 crc kubenswrapper[4842]: E0202 07:35:19.435286 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:35:33 crc kubenswrapper[4842]: I0202 07:35:33.433702 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:35:33 crc kubenswrapper[4842]: E0202 07:35:33.434399 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:35:47 crc kubenswrapper[4842]: I0202 07:35:47.435174 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:35:47 crc kubenswrapper[4842]: E0202 07:35:47.436577 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:35:59 crc kubenswrapper[4842]: I0202 07:35:59.434413 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:35:59 crc kubenswrapper[4842]: E0202 07:35:59.435366 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:36:14 crc kubenswrapper[4842]: I0202 07:36:14.434350 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:36:14 crc kubenswrapper[4842]: E0202 07:36:14.435504 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:36:28 crc kubenswrapper[4842]: I0202 07:36:28.434624 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:36:28 crc kubenswrapper[4842]: E0202 07:36:28.435651 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:36:41 crc kubenswrapper[4842]: I0202 07:36:41.434538 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:36:41 crc kubenswrapper[4842]: E0202 07:36:41.435795 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:36:54 crc kubenswrapper[4842]: I0202 07:36:54.434285 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:36:54 crc kubenswrapper[4842]: E0202 07:36:54.434943 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:37:09 crc kubenswrapper[4842]: I0202 07:37:09.434006 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:37:09 crc kubenswrapper[4842]: E0202 07:37:09.434980 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:37:23 crc kubenswrapper[4842]: I0202 07:37:23.433882 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:37:23 crc kubenswrapper[4842]: E0202 07:37:23.434686 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:37:34 crc kubenswrapper[4842]: I0202 07:37:34.434156 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:37:34 crc kubenswrapper[4842]: E0202 07:37:34.435246 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:37:45 crc kubenswrapper[4842]: I0202 07:37:45.441033 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:37:45 crc kubenswrapper[4842]: E0202 07:37:45.441923 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:37:56 crc kubenswrapper[4842]: I0202 07:37:56.433968 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:37:56 crc kubenswrapper[4842]: E0202 07:37:56.435554 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:38:10 crc kubenswrapper[4842]: I0202 07:38:10.433760 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:38:10 crc kubenswrapper[4842]: E0202 07:38:10.434413 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:38:22 crc kubenswrapper[4842]: I0202 07:38:22.434423 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:38:22 crc kubenswrapper[4842]: E0202 07:38:22.435303 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:38:36 crc kubenswrapper[4842]: I0202 07:38:36.434029 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:38:36 crc kubenswrapper[4842]: E0202 07:38:36.435287 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:38:47 crc kubenswrapper[4842]: I0202 07:38:47.433961 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:38:48 crc kubenswrapper[4842]: I0202 07:38:48.109309 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d"}
Feb 02 07:41:12 crc kubenswrapper[4842]: I0202 07:41:12.146104 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:41:12 crc kubenswrapper[4842]: I0202 07:41:12.147434 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:41:42 crc kubenswrapper[4842]: I0202 07:41:42.146533 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:41:42 crc kubenswrapper[4842]: I0202 07:41:42.147201 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.146515 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.147263 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.147328 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr"
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.148289 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.148384 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d" gracePeriod=600
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.968945 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d" exitCode=0
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.969038 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d"}
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.969330 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"}
Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.969352 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"
Feb 02 07:44:12 crc kubenswrapper[4842]: I0202 07:44:12.146857 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:44:12 crc kubenswrapper[4842]: I0202 07:44:12.147657 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.819569 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d9hpw"]
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.822374 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="extract-utilities"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.822556 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="extract-utilities"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.822686 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.822816 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.822958 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823085 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.823246 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-content"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823381 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-content"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.823563 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823703 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.823831 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="extract-content"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823945 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="extract-content"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.824070 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-content"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.824240 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-content"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.824401 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-utilities"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.824519 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-utilities"
Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.824654 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-utilities"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.824779 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-utilities"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.825138 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.825308 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.825446 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.827379 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.833772 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hpw"]
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.942930 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-utilities\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.943005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8d2w\" (UniqueName: \"kubernetes.io/projected/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-kube-api-access-l8d2w\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.943086 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-catalog-content\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.044201 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-catalog-content\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.044314 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-utilities\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.044362 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8d2w\" (UniqueName: \"kubernetes.io/projected/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-kube-api-access-l8d2w\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.045323 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-utilities\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.045569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-catalog-content\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.065798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8d2w\" (UniqueName: \"kubernetes.io/projected/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-kube-api-access-l8d2w\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.162359 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.662513 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hpw"]
Feb 02 07:44:24 crc kubenswrapper[4842]: W0202 07:44:24.673964 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6af4d552_478d_4a9f_8fcb_8a4b30a29f61.slice/crio-60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a WatchSource:0}: Error finding container 60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a: Status 404 returned error can't find the container with id 60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a
Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.173047 4842 generic.go:334] "Generic (PLEG): container finished" podID="6af4d552-478d-4a9f-8fcb-8a4b30a29f61" containerID="13d4228704a796faa071a0142ccf878d2f1cc2ea93c1f3316e9ce309bc8be98e" exitCode=0
Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.173105 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerDied","Data":"13d4228704a796faa071a0142ccf878d2f1cc2ea93c1f3316e9ce309bc8be98e"}
Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.173146 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerStarted","Data":"60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a"}
Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.176365 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 07:44:29 crc kubenswrapper[4842]: I0202 07:44:29.208116 4842 generic.go:334] "Generic (PLEG): container finished" podID="6af4d552-478d-4a9f-8fcb-8a4b30a29f61" containerID="d5522e370ac81656c4e7bcc3c2662c52297296f66ad1208fdb616b69ac366536" exitCode=0
Feb 02 07:44:29 crc kubenswrapper[4842]: I0202 07:44:29.208265 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerDied","Data":"d5522e370ac81656c4e7bcc3c2662c52297296f66ad1208fdb616b69ac366536"}
Feb 02 07:44:30 crc kubenswrapper[4842]: I0202 07:44:30.221397 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerStarted","Data":"97c92a333bf2b3560732267b9b0e9d19422683f2fb0af959ea259e7a17893cde"}
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.163280 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.163648 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.242635 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.277929 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d9hpw" podStartSLOduration=6.8101695079999995 podStartE2EDuration="11.277909122s" podCreationTimestamp="2026-02-02 07:44:23 +0000 UTC" firstStartedPulling="2026-02-02 07:44:25.175870213 +0000 UTC m=+3490.553138165" lastFinishedPulling="2026-02-02 07:44:29.643609867 +0000 UTC m=+3495.020877779" observedRunningTime="2026-02-02 07:44:30.251340509 +0000 UTC m=+3495.628608481" watchObservedRunningTime="2026-02-02 07:44:34.277909122 +0000 UTC m=+3499.655177034"
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.322014 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d9hpw"
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.397788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hpw"]
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.493949 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"]
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.494250 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7hg8l" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" containerID="cri-o://05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" gracePeriod=2
Feb 02 07:44:34 crc kubenswrapper[4842]: E0202 07:44:34.696915 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d21de2_d86f_4434_a132_ac1e81b63377.slice/crio-conmon-05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52.scope\": RecentStats: unable to find data in memory cache]"
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.910614 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l"
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.942694 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"79d21de2-d86f-4434-a132-ac1e81b63377\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") "
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.942817 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"79d21de2-d86f-4434-a132-ac1e81b63377\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") "
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.942856 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"79d21de2-d86f-4434-a132-ac1e81b63377\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") "
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.944530 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities" (OuterVolumeSpecName: "utilities") pod "79d21de2-d86f-4434-a132-ac1e81b63377" (UID: "79d21de2-d86f-4434-a132-ac1e81b63377"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.968881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4" (OuterVolumeSpecName: "kube-api-access-dfhk4") pod "79d21de2-d86f-4434-a132-ac1e81b63377" (UID: "79d21de2-d86f-4434-a132-ac1e81b63377"). InnerVolumeSpecName "kube-api-access-dfhk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.002521 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79d21de2-d86f-4434-a132-ac1e81b63377" (UID: "79d21de2-d86f-4434-a132-ac1e81b63377"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.043512 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.043543 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.043557 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.269851 4842 generic.go:334] "Generic (PLEG): container finished" podID="79d21de2-d86f-4434-a132-ac1e81b63377" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" exitCode=0 Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.269936 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.269977 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52"} Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.270056 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a"} Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.270107 4842 scope.go:117] "RemoveContainer" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.325375 4842 scope.go:117] "RemoveContainer" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.336027 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.346779 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.352131 4842 scope.go:117] "RemoveContainer" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.377208 4842 scope.go:117] "RemoveContainer" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" Feb 02 07:44:35 crc kubenswrapper[4842]: E0202 07:44:35.377760 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52\": container with ID starting with 05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52 not found: ID does not exist" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.377814 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52"} err="failed to get container status \"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52\": rpc error: code = NotFound desc = could not find container \"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52\": container with ID starting with 05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52 not found: ID does not exist" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.377844 4842 scope.go:117] "RemoveContainer" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" Feb 02 07:44:35 crc kubenswrapper[4842]: E0202 07:44:35.378652 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809\": container with ID starting with 0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809 not found: ID does not exist" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.378712 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809"} err="failed to get container status \"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809\": rpc error: code = NotFound desc = could not find container \"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809\": container with ID starting with 0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809 not found: ID does not exist" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.378753 4842 scope.go:117] "RemoveContainer" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" Feb 02 07:44:35 crc kubenswrapper[4842]: E0202 07:44:35.379439 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74\": container with ID starting with 29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74 not found: ID does not exist" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.379474 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74"} err="failed to get container status \"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74\": rpc error: code = NotFound desc = could not find container \"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74\": container with ID starting with 29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74 not found: ID does not exist" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.446897 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" path="/var/lib/kubelet/pods/79d21de2-d86f-4434-a132-ac1e81b63377/volumes" Feb 02 07:44:42 crc kubenswrapper[4842]: I0202 07:44:42.146577 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:44:42 crc kubenswrapper[4842]: I0202 07:44:42.147482 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403352 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:43 crc kubenswrapper[4842]: E0202 07:44:43.403710 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403725 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" Feb 02 07:44:43 crc kubenswrapper[4842]: E0202 07:44:43.403740 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-content" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403748 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-content" Feb 02 07:44:43 crc kubenswrapper[4842]: E0202 07:44:43.403762 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-utilities" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403772 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-utilities" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403942 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.405508 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.419547 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.431682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.431759 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.431819 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.532920 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.533861 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.534735 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.535611 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.535674 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.560608 4842 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.742750 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:44 crc kubenswrapper[4842]: I0202 07:44:44.272056 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:44 crc kubenswrapper[4842]: I0202 07:44:44.357843 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerStarted","Data":"9a4c47ec4eecaaf32b1d0cd388f9d248ff0d88afb81bbd7742ee19fbee20f67d"} Feb 02 07:44:45 crc kubenswrapper[4842]: I0202 07:44:45.373342 4842 generic.go:334] "Generic (PLEG): container finished" podID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" exitCode=0 Feb 02 07:44:45 crc kubenswrapper[4842]: I0202 07:44:45.373460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126"} Feb 02 07:44:47 crc kubenswrapper[4842]: I0202 07:44:47.413556 4842 generic.go:334] "Generic (PLEG): container finished" podID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" exitCode=0 Feb 02 07:44:47 crc kubenswrapper[4842]: I0202 07:44:47.414175 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04"} Feb 02 07:44:48 crc kubenswrapper[4842]: I0202 07:44:48.424198 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerStarted","Data":"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb"} Feb 02 07:44:48 crc kubenswrapper[4842]: I0202 07:44:48.455344 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5zkp" podStartSLOduration=2.9526222669999997 podStartE2EDuration="5.455317805s" podCreationTimestamp="2026-02-02 07:44:43 +0000 UTC" firstStartedPulling="2026-02-02 07:44:45.375816601 +0000 UTC m=+3510.753084543" lastFinishedPulling="2026-02-02 07:44:47.878512129 +0000 UTC m=+3513.255780081" observedRunningTime="2026-02-02 07:44:48.448615879 +0000 UTC m=+3513.825883801" watchObservedRunningTime="2026-02-02 07:44:48.455317805 +0000 UTC m=+3513.832585747" Feb 02 07:44:53 crc kubenswrapper[4842]: I0202 07:44:53.743146 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:53 crc kubenswrapper[4842]: I0202 07:44:53.743802 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:53 crc kubenswrapper[4842]: I0202 07:44:53.821257 4842 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:54 crc kubenswrapper[4842]: I0202 07:44:54.556277 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:55 crc kubenswrapper[4842]: I0202 07:44:55.027533 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:56 crc kubenswrapper[4842]: I0202 07:44:56.496899 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5zkp" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" containerID="cri-o://dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" gracePeriod=2 Feb 02 07:44:56 crc kubenswrapper[4842]: I0202 07:44:56.894776 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.070437 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"c164b1b9-c3c4-403d-9000-6a49460db9de\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.070915 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"c164b1b9-c3c4-403d-9000-6a49460db9de\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.071009 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"c164b1b9-c3c4-403d-9000-6a49460db9de\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.072177 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities" (OuterVolumeSpecName: "utilities") pod "c164b1b9-c3c4-403d-9000-6a49460db9de" (UID: "c164b1b9-c3c4-403d-9000-6a49460db9de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.080409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj" (OuterVolumeSpecName: "kube-api-access-k9lbj") pod "c164b1b9-c3c4-403d-9000-6a49460db9de" (UID: "c164b1b9-c3c4-403d-9000-6a49460db9de"). InnerVolumeSpecName "kube-api-access-k9lbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.114984 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c164b1b9-c3c4-403d-9000-6a49460db9de" (UID: "c164b1b9-c3c4-403d-9000-6a49460db9de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.174575 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.174853 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.174878 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508745 4842 generic.go:334] "Generic (PLEG): container finished" podID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" exitCode=0 Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508820 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb"} Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508877 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"9a4c47ec4eecaaf32b1d0cd388f9d248ff0d88afb81bbd7742ee19fbee20f67d"} Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508920 4842 scope.go:117] "RemoveContainer" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508991 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.543309 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.549501 4842 scope.go:117] "RemoveContainer" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.550915 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.576912 4842 scope.go:117] "RemoveContainer" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.605792 4842 scope.go:117] "RemoveContainer" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" Feb 02 07:44:57 crc kubenswrapper[4842]: E0202 07:44:57.606325 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb\": container with ID starting with dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb not found: ID does not exist" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606364 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb"} err="failed to get container status \"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb\": rpc error: code = NotFound desc = could not find container \"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb\": container with ID starting with dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb not found: ID does not exist" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606392 4842 scope.go:117] "RemoveContainer" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" Feb 02 07:44:57 crc kubenswrapper[4842]: E0202 07:44:57.606769 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04\": container with ID starting with afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04 not found: ID does not exist" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606799 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04"} err="failed to get container status \"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04\": rpc error: code = NotFound desc = could not find container \"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04\": container with ID starting with afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04 not found: ID does not exist" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606819 4842 scope.go:117] "RemoveContainer" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" Feb 02 07:44:57 crc kubenswrapper[4842]: E0202 07:44:57.607057 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126\": container with ID starting with 0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126 not found: ID does not exist" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.607089 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126"} err="failed to get container status \"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126\": rpc error: code = NotFound desc = could not find container \"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126\": container with ID starting with 0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126 not found: ID does not exist" Feb 02 07:44:59 crc kubenswrapper[4842]: I0202 07:44:59.450257 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" path="/var/lib/kubelet/pods/c164b1b9-c3c4-403d-9000-6a49460db9de/volumes" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154505 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 07:45:00 crc kubenswrapper[4842]: E0202 07:45:00.154868 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154885 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4842]: E0202 07:45:00.154913 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154921 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4842]: E0202 07:45:00.154939 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154947 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.155112 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.155748 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.158535 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.158786 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.168457 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.340054 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.340134 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.340231 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441047 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441121 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441197 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441843 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod 
\"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.453971 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.468029 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.505869 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.934419 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 07:45:00 crc kubenswrapper[4842]: W0202 07:45:00.946379 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aa0f9fa_efa5_4afa_bce6_88ca1eeef6b6.slice/crio-85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9 WatchSource:0}: Error finding container 85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9: Status 404 returned error can't find the container with id 85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9 Feb 02 07:45:01 crc kubenswrapper[4842]: I0202 07:45:01.563286 4842 generic.go:334] "Generic (PLEG): container finished" podID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerID="9c91867e37901f6b77d290214bde0cb71563f9ff02b28875bfa2c96b8d680083" exitCode=0 Feb 02 07:45:01 crc kubenswrapper[4842]: I0202 07:45:01.563338 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" event={"ID":"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6","Type":"ContainerDied","Data":"9c91867e37901f6b77d290214bde0cb71563f9ff02b28875bfa2c96b8d680083"} Feb 02 07:45:01 crc kubenswrapper[4842]: I0202 07:45:01.563516 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" event={"ID":"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6","Type":"ContainerStarted","Data":"85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9"} Feb 02 07:45:02 crc kubenswrapper[4842]: I0202 07:45:02.953864 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.083019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.083130 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.083173 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.084138 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume" (OuterVolumeSpecName: "config-volume") pod "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" (UID: "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.088214 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" (UID: "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.088368 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz" (OuterVolumeSpecName: "kube-api-access-klmzz") pod "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" (UID: "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6"). InnerVolumeSpecName "kube-api-access-klmzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.184307 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.184340 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.184352 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.578429 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" event={"ID":"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6","Type":"ContainerDied","Data":"85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9"} Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.578781 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.578833 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:04 crc kubenswrapper[4842]: I0202 07:45:04.064883 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:45:04 crc kubenswrapper[4842]: I0202 07:45:04.069762 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:45:05 crc kubenswrapper[4842]: I0202 07:45:05.441844 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" path="/var/lib/kubelet/pods/da36ad95-63f3-4cfb-8da7-96b730ccc79b/volumes" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.146517 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.147469 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.147561 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.148731 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"} 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.148866 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" gracePeriod=600 Feb 02 07:45:12 crc kubenswrapper[4842]: E0202 07:45:12.280387 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.670835 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" exitCode=0 Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.670861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"} Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.670904 4842 scope.go:117] "RemoveContainer" containerID="d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.671807 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:12 crc kubenswrapper[4842]: E0202 07:45:12.672061 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:21 crc kubenswrapper[4842]: I0202 07:45:21.956568 4842 scope.go:117] "RemoveContainer" containerID="dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49" Feb 02 07:45:26 crc kubenswrapper[4842]: I0202 07:45:26.433688 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:26 crc kubenswrapper[4842]: E0202 07:45:26.436773 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:37 crc kubenswrapper[4842]: I0202 07:45:37.433492 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:37 crc 
kubenswrapper[4842]: E0202 07:45:37.434672 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:48 crc kubenswrapper[4842]: I0202 07:45:48.434244 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:48 crc kubenswrapper[4842]: E0202 07:45:48.435341 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:59 crc kubenswrapper[4842]: I0202 07:45:59.433874 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:59 crc kubenswrapper[4842]: E0202 07:45:59.434623 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:12 crc kubenswrapper[4842]: I0202 07:46:12.433874 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:12 crc kubenswrapper[4842]: E0202 07:46:12.434928 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:26 crc kubenswrapper[4842]: I0202 07:46:26.433688 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:26 crc kubenswrapper[4842]: E0202 07:46:26.434487 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:41 crc kubenswrapper[4842]: I0202 07:46:41.434198 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:41 crc kubenswrapper[4842]: E0202 07:46:41.435939 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:55 crc kubenswrapper[4842]: I0202 07:46:55.441682 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:55 crc kubenswrapper[4842]: E0202 07:46:55.442873 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:08 crc kubenswrapper[4842]: I0202 07:47:08.434341 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:08 crc kubenswrapper[4842]: E0202 07:47:08.435535 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:23 crc kubenswrapper[4842]: I0202 07:47:23.435205 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:23 crc kubenswrapper[4842]: E0202 07:47:23.436135 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:37 crc kubenswrapper[4842]: I0202 07:47:37.433922 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:37 crc kubenswrapper[4842]: E0202 07:47:37.435036 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:50 crc kubenswrapper[4842]: I0202 07:47:50.434668 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:50 crc kubenswrapper[4842]: E0202 07:47:50.435855 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:02 crc kubenswrapper[4842]: I0202 07:48:02.434181 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:02 crc kubenswrapper[4842]: E0202 07:48:02.435263 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:14 crc kubenswrapper[4842]: I0202 07:48:14.433421 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:14 crc kubenswrapper[4842]: E0202 07:48:14.434133 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:27 crc kubenswrapper[4842]: I0202 07:48:27.434138 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:27 crc kubenswrapper[4842]: E0202 07:48:27.435358 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.084345 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:31 crc kubenswrapper[4842]: E0202 07:48:31.085459 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerName="collect-profiles" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.085492 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerName="collect-profiles" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.085828 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerName="collect-profiles" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.088327 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.098579 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.141080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-catalog-content\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.141427 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbwt\" (UniqueName: \"kubernetes.io/projected/940dd57b-92a3-4e95-b3b4-5df0efe013b1-kube-api-access-lwbwt\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.141584 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-utilities\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.242866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-catalog-content\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243177 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwbwt\" (UniqueName: \"kubernetes.io/projected/940dd57b-92a3-4e95-b3b4-5df0efe013b1-kube-api-access-lwbwt\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-utilities\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243441 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-catalog-content\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243833 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-utilities\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.263368 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lwbwt\" (UniqueName: \"kubernetes.io/projected/940dd57b-92a3-4e95-b3b4-5df0efe013b1-kube-api-access-lwbwt\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.470869 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.925182 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:32 crc kubenswrapper[4842]: I0202 07:48:32.647418 4842 generic.go:334] "Generic (PLEG): container finished" podID="940dd57b-92a3-4e95-b3b4-5df0efe013b1" containerID="b75725e4c50215f0635909d4cdaa29f7f6dcb1530244ea888272ca94fe49ea4b" exitCode=0 Feb 02 07:48:32 crc kubenswrapper[4842]: I0202 07:48:32.647479 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerDied","Data":"b75725e4c50215f0635909d4cdaa29f7f6dcb1530244ea888272ca94fe49ea4b"} Feb 02 07:48:32 crc kubenswrapper[4842]: I0202 07:48:32.647519 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerStarted","Data":"3d60c79a8911f95c3847b82d02ccf6ea42ed7ecae12f4e541bb7f8bc932c2f28"} Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.464280 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.466159 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.473788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.599152 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.599209 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.599395 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701159 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701205 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701276 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701805 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.702136 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.721319 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.795045 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.039315 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:35 crc kubenswrapper[4842]: W0202 07:48:35.049595 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b86c40_ec89_476e_b4ef_c589af5cfd51.slice/crio-1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96 WatchSource:0}: Error finding container 1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96: Status 404 returned error can't find the container with id 1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96 Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.671102 4842 generic.go:334] "Generic (PLEG): container finished" podID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2" exitCode=0 Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.671226 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2"} Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.672366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerStarted","Data":"1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96"} Feb 02 07:48:37 crc kubenswrapper[4842]: I0202 07:48:37.691734 4842 generic.go:334] "Generic (PLEG): container finished" podID="940dd57b-92a3-4e95-b3b4-5df0efe013b1" containerID="f0a4a91c57e0912a079986a777057c27130537abf36090ea266336979a3fa017" exitCode=0 Feb 02 07:48:37 crc kubenswrapper[4842]: I0202 07:48:37.691999 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerDied","Data":"f0a4a91c57e0912a079986a777057c27130537abf36090ea266336979a3fa017"} Feb 02 07:48:37 crc kubenswrapper[4842]: I0202 07:48:37.699285 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerStarted","Data":"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"} Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.433966 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:38 crc kubenswrapper[4842]: E0202 07:48:38.434406 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.710305 4842 generic.go:334] "Generic (PLEG): container finished" podID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490" exitCode=0 Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.710444 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"} Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.713997 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerStarted","Data":"176976b5333699da049b24ff21866f294c1ef9f0c8416775fd13db72d7127058"} Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.792444 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7hzjr" podStartSLOduration=2.31381694 podStartE2EDuration="7.792416044s" podCreationTimestamp="2026-02-02 07:48:31 +0000 UTC" firstStartedPulling="2026-02-02 07:48:32.65014401 +0000 UTC m=+3738.027411952" lastFinishedPulling="2026-02-02 07:48:38.128743134 +0000 UTC m=+3743.506011056" observedRunningTime="2026-02-02 07:48:38.779004821 +0000 UTC m=+3744.156272773" watchObservedRunningTime="2026-02-02 07:48:38.792416044 +0000 UTC m=+3744.169683986" Feb 02 07:48:39 crc kubenswrapper[4842]: I0202 07:48:39.725786 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerStarted","Data":"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"} Feb 02 07:48:39 crc kubenswrapper[4842]: I0202 07:48:39.756540 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-279f8" podStartSLOduration=2.322324935 podStartE2EDuration="5.756507845s" podCreationTimestamp="2026-02-02 07:48:34 +0000 UTC" firstStartedPulling="2026-02-02 07:48:35.673276097 +0000 UTC m=+3741.050544009" lastFinishedPulling="2026-02-02 07:48:39.107459007 +0000 UTC m=+3744.484726919" observedRunningTime="2026-02-02 07:48:39.750629069 +0000 UTC m=+3745.127896991" watchObservedRunningTime="2026-02-02 07:48:39.756507845 +0000 UTC m=+3745.133775787" Feb 02 07:48:41 crc kubenswrapper[4842]: I0202 07:48:41.471845 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:41 crc kubenswrapper[4842]: I0202 07:48:41.471890 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:41 crc kubenswrapper[4842]: I0202 07:48:41.525337 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:44 crc kubenswrapper[4842]: I0202 07:48:44.795191 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:44 crc kubenswrapper[4842]: I0202 07:48:44.795632 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 
07:48:45 crc kubenswrapper[4842]: I0202 07:48:45.866178 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-279f8" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server" probeResult="failure" output=< Feb 02 07:48:45 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 07:48:45 crc kubenswrapper[4842]: > Feb 02 07:48:49 crc kubenswrapper[4842]: I0202 07:48:49.434452 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:49 crc kubenswrapper[4842]: E0202 07:48:49.435503 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.523557 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.605613 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.643782 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.644033 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cbwzh" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server" containerID="cri-o://e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968" gracePeriod=2 Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.841546 4842 generic.go:334] "Generic (PLEG): container finished" podID="9969706e-304c-490a-b15d-7d0bfc99261c" containerID="e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968" exitCode=0 Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.842292 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968"} Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.040725 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.087567 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"9969706e-304c-490a-b15d-7d0bfc99261c\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.087686 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"9969706e-304c-490a-b15d-7d0bfc99261c\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.087780 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"9969706e-304c-490a-b15d-7d0bfc99261c\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.088511 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities" (OuterVolumeSpecName: "utilities") pod "9969706e-304c-490a-b15d-7d0bfc99261c" (UID: "9969706e-304c-490a-b15d-7d0bfc99261c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.093317 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr" (OuterVolumeSpecName: "kube-api-access-tvsxr") pod "9969706e-304c-490a-b15d-7d0bfc99261c" (UID: "9969706e-304c-490a-b15d-7d0bfc99261c"). InnerVolumeSpecName "kube-api-access-tvsxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.157027 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9969706e-304c-490a-b15d-7d0bfc99261c" (UID: "9969706e-304c-490a-b15d-7d0bfc99261c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.189472 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.189503 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.189516 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.850769 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997"} Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.850833 4842 scope.go:117] "RemoveContainer" containerID="e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.850832 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.875761 4842 scope.go:117] "RemoveContainer" containerID="308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.889466 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.894766 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.895956 4842 scope.go:117] "RemoveContainer" containerID="cdc5b57eaa471b1df4736cdcd50fb5f9ddf54fbd99f33734d0e692fc9f77a97f" Feb 02 07:48:53 crc kubenswrapper[4842]: I0202 07:48:53.448632 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" path="/var/lib/kubelet/pods/9969706e-304c-490a-b15d-7d0bfc99261c/volumes" Feb 02 07:48:54 crc kubenswrapper[4842]: I0202 07:48:54.874729 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:54 crc kubenswrapper[4842]: I0202 07:48:54.947410 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:55 crc kubenswrapper[4842]: I0202 07:48:55.967102 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:56 crc kubenswrapper[4842]: I0202 07:48:56.888578 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-279f8" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server" containerID="cri-o://6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac" gracePeriod=2 Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.284472 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.385669 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"71b86c40-ec89-476e-b4ef-c589af5cfd51\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.386004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"71b86c40-ec89-476e-b4ef-c589af5cfd51\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.386098 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"71b86c40-ec89-476e-b4ef-c589af5cfd51\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.388345 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities" (OuterVolumeSpecName: "utilities") pod "71b86c40-ec89-476e-b4ef-c589af5cfd51" (UID: "71b86c40-ec89-476e-b4ef-c589af5cfd51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.401973 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s" (OuterVolumeSpecName: "kube-api-access-xgz4s") pod "71b86c40-ec89-476e-b4ef-c589af5cfd51" (UID: "71b86c40-ec89-476e-b4ef-c589af5cfd51"). InnerVolumeSpecName "kube-api-access-xgz4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.487840 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.487886 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.536107 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71b86c40-ec89-476e-b4ef-c589af5cfd51" (UID: "71b86c40-ec89-476e-b4ef-c589af5cfd51"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.590306 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.902972 4842 generic.go:334] "Generic (PLEG): container finished" podID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac" exitCode=0 Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"} Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903054 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903084 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96"} Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903116 4842 scope.go:117] "RemoveContainer" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.931497 4842 scope.go:117] "RemoveContainer" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490" Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.962934 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.976960 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.988755 4842 scope.go:117] "RemoveContainer" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2" Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.026130 4842 scope.go:117] "RemoveContainer" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac" Feb 02 07:48:58 crc kubenswrapper[4842]: E0202 07:48:58.026753 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac\": container with ID starting with 6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac not found: ID does not exist" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac" Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.026804 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"} err="failed to get container status \"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac\": rpc error: code = NotFound desc = could not find container \"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac\": container with ID starting with 6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac not found: ID does not exist" Feb 02 07:48:58 crc 
kubenswrapper[4842]: I0202 07:48:58.026825 4842 scope.go:117] "RemoveContainer" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490" Feb 02 07:48:58 crc kubenswrapper[4842]: E0202 07:48:58.027235 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490\": container with ID starting with 4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490 not found: ID does not exist" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490" Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.027258 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"} err="failed to get container status \"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490\": rpc error: code = NotFound desc = could not find container \"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490\": container with ID starting with 4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490 not found: ID does not exist" Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.027271 4842 scope.go:117] "RemoveContainer" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2" Feb 02 07:48:58 crc kubenswrapper[4842]: E0202 07:48:58.027831 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2\": container with ID starting with fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2 not found: ID does not exist" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2" Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.027884 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2"} err="failed to get container status \"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2\": rpc error: code = NotFound desc = could not find container \"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2\": container with ID starting with fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2 not found: ID does not exist" Feb 02 07:48:59 crc kubenswrapper[4842]: I0202 07:48:59.450066 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" path="/var/lib/kubelet/pods/71b86c40-ec89-476e-b4ef-c589af5cfd51/volumes" Feb 02 07:49:04 crc kubenswrapper[4842]: I0202 07:49:04.434948 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:49:04 crc kubenswrapper[4842]: E0202 07:49:04.437162 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:49:16 crc kubenswrapper[4842]: I0202 07:49:16.434371 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" 
Feb 02 07:49:16 crc kubenswrapper[4842]: E0202 07:49:16.435826 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:49:27 crc kubenswrapper[4842]: I0202 07:49:27.433501 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:49:27 crc kubenswrapper[4842]: E0202 07:49:27.434922 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:49:41 crc kubenswrapper[4842]: I0202 07:49:41.435303 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:49:41 crc kubenswrapper[4842]: E0202 07:49:41.436999 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:49:55 crc kubenswrapper[4842]: I0202 07:49:55.441770 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:49:55 crc kubenswrapper[4842]: E0202 07:49:55.442875 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:50:09 crc kubenswrapper[4842]: I0202 07:50:09.433741 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:50:09 crc kubenswrapper[4842]: E0202 07:50:09.434801 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:50:23 crc kubenswrapper[4842]: I0202 07:50:23.434207 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:50:23 crc kubenswrapper[4842]: I0202 07:50:23.892550 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" 
event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2"} Feb 02 07:52:42 crc kubenswrapper[4842]: I0202 07:52:42.146095 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:52:42 crc kubenswrapper[4842]: I0202 07:52:42.146709 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:53:12 crc kubenswrapper[4842]: I0202 07:53:12.146429 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:53:12 crc kubenswrapper[4842]: I0202 07:53:12.146999 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.145894 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.146573 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.146653 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.147598 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.147731 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2" gracePeriod=600 Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.281197 4842 generic.go:334] "Generic (PLEG): container finished" 
podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2" exitCode=0 Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.281273 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2"} Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.281705 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:53:43 crc kubenswrapper[4842]: I0202 07:53:43.297054 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"} Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.139968 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"] Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140808 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-content" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140824 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-content" Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140837 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-content" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140845 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-content" Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140867 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-utilities" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140875 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-utilities" Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140890 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140897 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server" Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140906 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-utilities" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140913 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-utilities" Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140931 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140940 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server" Feb 02 
07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.141100 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.141122 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.142375 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.159050 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"] Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.229936 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.230022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.230148 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332150 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332520 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " 
pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.333370 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.376558 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.471561 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:37.998335 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"] Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.654781 4842 generic.go:334] "Generic (PLEG): container finished" podID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154" exitCode=0 Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.654911 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154"} Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.655515 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerStarted","Data":"4c94cb065d9d3c1220fea3b4b684400a17590e22953a4eb305c511b4386ed940"} Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.659210 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:55:40 crc kubenswrapper[4842]: I0202 07:55:40.682181 4842 generic.go:334] "Generic (PLEG): container finished" podID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb" exitCode=0 Feb 02 07:55:40 crc kubenswrapper[4842]: I0202 07:55:40.682405 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb"} Feb 02 07:55:41 crc kubenswrapper[4842]: I0202 07:55:41.697144 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerStarted","Data":"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"} Feb 02 07:55:41 crc kubenswrapper[4842]: I0202 07:55:41.735570 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwxbs" podStartSLOduration=2.259022431 podStartE2EDuration="4.735195522s" podCreationTimestamp="2026-02-02 07:55:37 +0000 UTC" firstStartedPulling="2026-02-02 07:55:38.658778784 +0000 UTC m=+4164.036046726" 
lastFinishedPulling="2026-02-02 07:55:41.134951895 +0000 UTC m=+4166.512219817" observedRunningTime="2026-02-02 07:55:41.730730241 +0000 UTC m=+4167.107998173" watchObservedRunningTime="2026-02-02 07:55:41.735195522 +0000 UTC m=+4167.112463474" Feb 02 07:55:42 crc kubenswrapper[4842]: I0202 07:55:42.146834 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:55:42 crc kubenswrapper[4842]: I0202 07:55:42.147506 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.476500 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.477265 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.544559 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.872532 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.938955 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"] Feb 02 07:55:49 crc kubenswrapper[4842]: I0202 07:55:49.773338 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwxbs" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server" containerID="cri-o://5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768" gracePeriod=2 Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.257515 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.435461 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.435536 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.435670 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.437442 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities" (OuterVolumeSpecName: "utilities") pod "f8256e28-ef80-4c77-87cf-5c5fa552a61a" (UID: "f8256e28-ef80-4c77-87cf-5c5fa552a61a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.444254 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw" (OuterVolumeSpecName: "kube-api-access-wxpdw") pod "f8256e28-ef80-4c77-87cf-5c5fa552a61a" (UID: "f8256e28-ef80-4c77-87cf-5c5fa552a61a"). InnerVolumeSpecName "kube-api-access-wxpdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.477288 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8256e28-ef80-4c77-87cf-5c5fa552a61a" (UID: "f8256e28-ef80-4c77-87cf-5c5fa552a61a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.537515 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") on node \"crc\" DevicePath \"\"" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.537583 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.537613 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785763 4842 generic.go:334] "Generic (PLEG): container finished" podID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768" exitCode=0 Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785822 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"} Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785873 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"4c94cb065d9d3c1220fea3b4b684400a17590e22953a4eb305c511b4386ed940"} Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785908 4842 scope.go:117] "RemoveContainer" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.789103 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.810863 4842 scope.go:117] "RemoveContainer" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.843768 4842 scope.go:117] "RemoveContainer" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.852152 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"] Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.864676 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"] Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.895449 4842 scope.go:117] "RemoveContainer" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768" Feb 02 07:55:50 crc kubenswrapper[4842]: E0202 07:55:50.896309 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768\": container with ID starting with 5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768 not found: ID does not exist" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.896378 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"} err="failed to get container status \"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768\": rpc error: code = NotFound desc = could not find container \"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768\": container with ID starting with 5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768 not found: ID does not exist" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.896423 4842 scope.go:117] "RemoveContainer" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb" Feb 02 07:55:50 crc kubenswrapper[4842]: E0202 07:55:50.897551 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb\": container with ID starting with d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb not found: ID does not exist" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.897594 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb"} err="failed to get container status \"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb\": rpc error: code = NotFound desc = could not find container \"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb\": container with ID starting with d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb not found: ID does not exist" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.897630 4842 scope.go:117] "RemoveContainer" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154" Feb 02 07:55:50 crc kubenswrapper[4842]: E0202 07:55:50.898202 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154\": container with ID starting with 48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154 not found: ID does not exist" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154" Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.898292 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154"} err="failed to get container status \"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154\": rpc error: code = NotFound desc = could not find container \"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154\": container with ID starting with 48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154 not found: ID does not exist" Feb 02 07:55:51 crc kubenswrapper[4842]: I0202 07:55:51.451786 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" path="/var/lib/kubelet/pods/f8256e28-ef80-4c77-87cf-5c5fa552a61a/volumes" Feb 02 07:56:12 crc kubenswrapper[4842]: I0202 07:56:12.147141 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:56:12 crc kubenswrapper[4842]: I0202 07:56:12.148141 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.302851 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:21 crc kubenswrapper[4842]: E0202 07:56:21.303964 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-utilities" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.303986 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-utilities" Feb 02 07:56:21 crc kubenswrapper[4842]: E0202 07:56:21.304038 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-content" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.304053 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-content" Feb 02 07:56:21 crc kubenswrapper[4842]: E0202 07:56:21.304078 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.304092 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.304362 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 
07:56:21.306259 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.330781 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.464737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.464870 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.465199 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.566774 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.566852 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.566912 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.567538 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.567555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc 
kubenswrapper[4842]: I0202 07:56:21.598631 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.646342 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:22 crc kubenswrapper[4842]: I0202 07:56:22.122514 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:23 crc kubenswrapper[4842]: I0202 07:56:23.081855 4842 generic.go:334] "Generic (PLEG): container finished" podID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" exitCode=0 Feb 02 07:56:23 crc kubenswrapper[4842]: I0202 07:56:23.081969 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02"} Feb 02 07:56:23 crc kubenswrapper[4842]: I0202 07:56:23.082266 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerStarted","Data":"4f41e69dad5dabb175e12aad2d4453f9d986e25ee248b8effa51bf29a75e83c8"} Feb 02 07:56:24 crc kubenswrapper[4842]: I0202 07:56:24.095132 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerStarted","Data":"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a"} Feb 02 07:56:25 crc kubenswrapper[4842]: I0202 07:56:25.102924 4842 generic.go:334] "Generic (PLEG): container finished" podID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" exitCode=0 Feb 02 07:56:25 crc kubenswrapper[4842]: I0202 07:56:25.102959 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a"} Feb 02 07:56:26 crc kubenswrapper[4842]: I0202 07:56:26.117363 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerStarted","Data":"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b"} Feb 02 07:56:26 crc kubenswrapper[4842]: I0202 07:56:26.152816 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5xgng" podStartSLOduration=2.698815839 podStartE2EDuration="5.152789309s" podCreationTimestamp="2026-02-02 07:56:21 +0000 UTC" firstStartedPulling="2026-02-02 07:56:23.086907423 +0000 UTC m=+4208.464175365" lastFinishedPulling="2026-02-02 07:56:25.540880883 +0000 UTC m=+4210.918148835" observedRunningTime="2026-02-02 07:56:26.142553474 +0000 UTC m=+4211.519821396" watchObservedRunningTime="2026-02-02 07:56:26.152789309 +0000 UTC m=+4211.530057231" Feb 02 07:56:31 crc kubenswrapper[4842]: I0202 
07:56:31.646956 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:31 crc kubenswrapper[4842]: I0202 07:56:31.648502 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:31 crc kubenswrapper[4842]: I0202 07:56:31.718885 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:32 crc kubenswrapper[4842]: I0202 07:56:32.249449 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:32 crc kubenswrapper[4842]: I0202 07:56:32.323350 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.192087 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5xgng" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" containerID="cri-o://131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" gracePeriod=2 Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.677857 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.783061 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.783405 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.783497 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.784584 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities" (OuterVolumeSpecName: "utilities") pod "472955b5-64fa-49fb-a6d5-78e8267c9e3a" (UID: "472955b5-64fa-49fb-a6d5-78e8267c9e3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.791976 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44" (OuterVolumeSpecName: "kube-api-access-j8c44") pod "472955b5-64fa-49fb-a6d5-78e8267c9e3a" (UID: "472955b5-64fa-49fb-a6d5-78e8267c9e3a"). InnerVolumeSpecName "kube-api-access-j8c44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.865502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "472955b5-64fa-49fb-a6d5-78e8267c9e3a" (UID: "472955b5-64fa-49fb-a6d5-78e8267c9e3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.884781 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.884812 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.884829 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") on node \"crc\" DevicePath \"\"" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207500 4842 generic.go:334] "Generic (PLEG): container finished" podID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" exitCode=0 Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207622 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207614 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b"} Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207866 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"4f41e69dad5dabb175e12aad2d4453f9d986e25ee248b8effa51bf29a75e83c8"} Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207935 4842 scope.go:117] "RemoveContainer" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.253586 4842 scope.go:117] "RemoveContainer" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.287758 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.301769 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.316609 4842 scope.go:117] "RemoveContainer" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.345776 4842 scope.go:117] "RemoveContainer" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" Feb 02 07:56:35 crc kubenswrapper[4842]: E0202 07:56:35.346371 4842 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b\": container with ID starting with 131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b not found: ID does not exist" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346405 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b"} err="failed to get container status \"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b\": rpc error: code = NotFound desc = could not find container \"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b\": container with ID starting with 131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b not found: ID does not exist" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346430 4842 scope.go:117] "RemoveContainer" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" Feb 02 07:56:35 crc kubenswrapper[4842]: E0202 07:56:35.346745 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a\": container with ID starting with 43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a not found: ID does not exist" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346792 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a"} err="failed to get container status \"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a\": rpc error: code = NotFound desc = could not find container \"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a\": container with ID starting with 43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a not found: ID does not exist" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346821 4842 scope.go:117] "RemoveContainer" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" Feb 02 07:56:35 crc kubenswrapper[4842]: E0202 07:56:35.347142 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02\": container with ID starting with 9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02 not found: ID does not exist" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.347169 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02"} err="failed to get container status \"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02\": rpc error: code = NotFound desc = could not find container \"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02\": container with ID starting with 9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02 not found: ID does not exist" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.441743 4842 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" path="/var/lib/kubelet/pods/472955b5-64fa-49fb-a6d5-78e8267c9e3a/volumes" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.146468 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.147295 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.147370 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.148207 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.148338 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" gracePeriod=600 Feb 02 07:56:42 crc kubenswrapper[4842]: E0202 07:56:42.280302 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.282380 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" exitCode=0 Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.282497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"} Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.282556 4842 scope.go:117] "RemoveContainer" containerID="2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2" Feb 02 07:56:43 crc kubenswrapper[4842]: I0202 07:56:43.297927 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:56:43 crc kubenswrapper[4842]: E0202 07:56:43.298400 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:56:54 crc kubenswrapper[4842]: I0202 07:56:54.433276 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:56:54 crc kubenswrapper[4842]: E0202 07:56:54.434895 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:05 crc kubenswrapper[4842]: I0202 07:57:05.448144 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:05 crc kubenswrapper[4842]: E0202 07:57:05.452475 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:16 crc kubenswrapper[4842]: I0202 07:57:16.435213 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:16 crc kubenswrapper[4842]: E0202 07:57:16.436328 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:27 crc kubenswrapper[4842]: I0202 07:57:27.437156 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:27 crc kubenswrapper[4842]: E0202 07:57:27.438864 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:38 crc kubenswrapper[4842]: I0202 07:57:38.433922 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:38 crc kubenswrapper[4842]: E0202 07:57:38.434904 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:52 crc kubenswrapper[4842]: I0202 07:57:52.435294 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:52 crc kubenswrapper[4842]: E0202 07:57:52.437346 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:06 crc kubenswrapper[4842]: I0202 07:58:06.434948 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:06 crc kubenswrapper[4842]: E0202 07:58:06.436248 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:19 crc kubenswrapper[4842]: I0202 07:58:19.433825 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:19 crc kubenswrapper[4842]: E0202 07:58:19.434609 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:30 crc kubenswrapper[4842]: I0202 07:58:30.434021 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:30 crc kubenswrapper[4842]: E0202 07:58:30.435112 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:43 crc kubenswrapper[4842]: I0202 07:58:43.434443 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:43 crc kubenswrapper[4842]: E0202 07:58:43.435723 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" 
podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:54 crc kubenswrapper[4842]: I0202 07:58:54.433895 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:54 crc kubenswrapper[4842]: E0202 07:58:54.434538 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:05 crc kubenswrapper[4842]: I0202 07:59:05.441927 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:59:05 crc kubenswrapper[4842]: E0202 07:59:05.442929 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881265 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:06 crc kubenswrapper[4842]: E0202 07:59:06.881844 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-utilities" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881868 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-utilities" Feb 02 07:59:06 crc kubenswrapper[4842]: E0202 07:59:06.881895 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-content" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881907 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-content" Feb 02 07:59:06 crc kubenswrapper[4842]: E0202 07:59:06.881964 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881978 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.882249 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.883998 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.895470 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.979077 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.979550 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.979587 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.080337 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.080386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.080453 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.081028 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.081125 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.113295 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.224429 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.689645 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:07 crc kubenswrapper[4842]: W0202 07:59:07.695954 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a0a396e_5aac_478f_82d3_a3f9dff03f2d.slice/crio-ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7 WatchSource:0}: Error finding container ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7: Status 404 returned error can't find the container with id ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7 Feb 02 07:59:08 crc kubenswrapper[4842]: I0202 07:59:08.701863 4842 generic.go:334] "Generic (PLEG): container finished" podID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" exitCode=0 Feb 02 07:59:08 crc kubenswrapper[4842]: I0202 07:59:08.701938 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656"} Feb 02 07:59:08 crc kubenswrapper[4842]: I0202 07:59:08.702366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerStarted","Data":"ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7"} Feb 02 07:59:10 crc kubenswrapper[4842]: I0202 07:59:10.716268 4842 generic.go:334] "Generic (PLEG): container finished" podID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" exitCode=0 Feb 02 07:59:10 crc kubenswrapper[4842]: I0202 07:59:10.716340 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144"} Feb 02 07:59:11 crc kubenswrapper[4842]: I0202 07:59:11.727351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerStarted","Data":"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151"} Feb 02 07:59:11 crc kubenswrapper[4842]: I0202 07:59:11.761864 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhrxp" podStartSLOduration=3.348451666 podStartE2EDuration="5.761838658s" podCreationTimestamp="2026-02-02 07:59:06 +0000 UTC" firstStartedPulling="2026-02-02 07:59:08.705542471 +0000 UTC m=+4374.082810383" lastFinishedPulling="2026-02-02 07:59:11.118929443 +0000 UTC m=+4376.496197375" observedRunningTime="2026-02-02 07:59:11.752210119 +0000 UTC m=+4377.129478081" watchObservedRunningTime="2026-02-02 07:59:11.761838658 
+0000 UTC m=+4377.139106610" Feb 02 07:59:17 crc kubenswrapper[4842]: I0202 07:59:17.225181 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:17 crc kubenswrapper[4842]: I0202 07:59:17.225680 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:18 crc kubenswrapper[4842]: I0202 07:59:18.287662 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qhrxp" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server" probeResult="failure" output=< Feb 02 07:59:18 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 07:59:18 crc kubenswrapper[4842]: > Feb 02 07:59:18 crc kubenswrapper[4842]: I0202 07:59:18.433279 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:59:18 crc kubenswrapper[4842]: E0202 07:59:18.433638 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:27 crc kubenswrapper[4842]: I0202 07:59:27.304991 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:27 crc kubenswrapper[4842]: I0202 07:59:27.392046 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:27 crc kubenswrapper[4842]: I0202 07:59:27.557127 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:28 crc kubenswrapper[4842]: I0202 07:59:28.908540 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qhrxp" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server" containerID="cri-o://784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" gracePeriod=2 Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.361587 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.542999 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.543132 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.543194 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.545054 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities" (OuterVolumeSpecName: "utilities") pod "4a0a396e-5aac-478f-82d3-a3f9dff03f2d" (UID: "4a0a396e-5aac-478f-82d3-a3f9dff03f2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.547625 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.553527 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc" (OuterVolumeSpecName: "kube-api-access-p9qsc") pod "4a0a396e-5aac-478f-82d3-a3f9dff03f2d" (UID: "4a0a396e-5aac-478f-82d3-a3f9dff03f2d"). InnerVolumeSpecName "kube-api-access-p9qsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.649278 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") on node \"crc\" DevicePath \"\"" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.705630 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a0a396e-5aac-478f-82d3-a3f9dff03f2d" (UID: "4a0a396e-5aac-478f-82d3-a3f9dff03f2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.751912 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923550 4842 generic.go:334] "Generic (PLEG): container finished" podID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" exitCode=0 Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923627 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151"} Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923682 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7"} Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923701 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923714 4842 scope.go:117] "RemoveContainer" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.956601 4842 scope.go:117] "RemoveContainer" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.984389 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.011672 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.014252 4842 scope.go:117] "RemoveContainer" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.036521 4842 scope.go:117] "RemoveContainer" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" Feb 02 07:59:30 crc kubenswrapper[4842]: E0202 07:59:30.037010 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151\": container with ID starting with 784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151 not found: ID does not exist" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.037050 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151"} err="failed to get container status \"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151\": rpc error: code = NotFound desc = could not find container \"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151\": container with ID starting with 784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151 not found: ID does not exist" Feb 02 07:59:30 crc 
kubenswrapper[4842]: I0202 07:59:30.037077 4842 scope.go:117] "RemoveContainer" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" Feb 02 07:59:30 crc kubenswrapper[4842]: E0202 07:59:30.037442 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144\": container with ID starting with db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144 not found: ID does not exist" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.037471 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144"} err="failed to get container status \"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144\": rpc error: code = NotFound desc = could not find container \"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144\": container with ID starting with db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144 not found: ID does not exist" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.037489 4842 scope.go:117] "RemoveContainer" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" Feb 02 07:59:30 crc kubenswrapper[4842]: E0202 07:59:30.038123 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656\": container with ID starting with f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656 not found: ID does not exist" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.038170 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656"} err="failed to get container status \"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656\": rpc error: code = NotFound desc = could not find container \"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656\": container with ID starting with f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656 not found: ID does not exist" Feb 02 07:59:31 crc kubenswrapper[4842]: I0202 07:59:31.434166 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:59:31 crc kubenswrapper[4842]: E0202 07:59:31.435086 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:31 crc kubenswrapper[4842]: I0202 07:59:31.449055 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" path="/var/lib/kubelet/pods/4a0a396e-5aac-478f-82d3-a3f9dff03f2d/volumes" Feb 02 07:59:45 crc kubenswrapper[4842]: I0202 07:59:45.441409 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" 
Feb 02 07:59:45 crc kubenswrapper[4842]: E0202 07:59:45.442362 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.208909 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"]
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.209737 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-utilities"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209750 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-utilities"
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.209778 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-content"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209784 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-content"
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.209805 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209812 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209944 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.210381 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.211980 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.212058 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.223204 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"]
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.390827 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.390890 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.391046 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.433659 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.433927 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.492479 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.492782 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.493038 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.493951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.503498 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.515783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.560792 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:01 crc kubenswrapper[4842]: I0202 08:00:01.012013 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"]
Feb 02 08:00:01 crc kubenswrapper[4842]: I0202 08:00:01.223687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl" event={"ID":"e01be79b-cbb5-4540-9a1c-5d0891ed6399","Type":"ContainerStarted","Data":"40aebe991f98f0098755eeb06cace74a285f48cd45fe5b2462d0a0f5a305f461"}
Feb 02 08:00:02 crc kubenswrapper[4842]: I0202 08:00:02.234031 4842 generic.go:334] "Generic (PLEG): container finished" podID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerID="53b8081d7a60c7c28d76b97b32f3fec298777e492876306e229be304ee7a402a" exitCode=0
Feb 02 08:00:02 crc kubenswrapper[4842]: I0202 08:00:02.234164 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl" event={"ID":"e01be79b-cbb5-4540-9a1c-5d0891ed6399","Type":"ContainerDied","Data":"53b8081d7a60c7c28d76b97b32f3fec298777e492876306e229be304ee7a402a"}
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.586439 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.736510 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") "
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.736574 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") "
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.736659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") "
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.738574 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume" (OuterVolumeSpecName: "config-volume") pod "e01be79b-cbb5-4540-9a1c-5d0891ed6399" (UID: "e01be79b-cbb5-4540-9a1c-5d0891ed6399"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.743834 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e01be79b-cbb5-4540-9a1c-5d0891ed6399" (UID: "e01be79b-cbb5-4540-9a1c-5d0891ed6399"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.760981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7" (OuterVolumeSpecName: "kube-api-access-9gtq7") pod "e01be79b-cbb5-4540-9a1c-5d0891ed6399" (UID: "e01be79b-cbb5-4540-9a1c-5d0891ed6399"). InnerVolumeSpecName "kube-api-access-9gtq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.838475 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.838511 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.838522 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") on node \"crc\" DevicePath \"\""
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.254324 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl" event={"ID":"e01be79b-cbb5-4540-9a1c-5d0891ed6399","Type":"ContainerDied","Data":"40aebe991f98f0098755eeb06cace74a285f48cd45fe5b2462d0a0f5a305f461"}
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.254371 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40aebe991f98f0098755eeb06cace74a285f48cd45fe5b2462d0a0f5a305f461"
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.254513 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.678012 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"]
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.692572 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"]
Feb 02 08:00:05 crc kubenswrapper[4842]: I0202 08:00:05.462570 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94334935-cf80-444c-b508-8c45e9780eee" path="/var/lib/kubelet/pods/94334935-cf80-444c-b508-8c45e9780eee/volumes"
Feb 02 08:00:14 crc kubenswrapper[4842]: I0202 08:00:14.433790 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:14 crc kubenswrapper[4842]: E0202 08:00:14.434694 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:22 crc kubenswrapper[4842]: I0202 08:00:22.396114 4842 scope.go:117] "RemoveContainer" containerID="3ec04990d6c97adea2fe95dabf427fb8df7522b562c84dbbcac33e51d0d54b26"
Feb 02 08:00:26 crc kubenswrapper[4842]: I0202 08:00:26.434106 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:26 crc kubenswrapper[4842]: E0202 08:00:26.437207 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:39 crc kubenswrapper[4842]: I0202 08:00:39.433615 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:39 crc kubenswrapper[4842]: E0202 08:00:39.434855 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:51 crc kubenswrapper[4842]: I0202 08:00:51.433785 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:51 crc kubenswrapper[4842]: E0202 08:00:51.434683 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:02 crc kubenswrapper[4842]: I0202 08:01:02.434554 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:02 crc kubenswrapper[4842]: E0202 08:01:02.435724 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:13 crc kubenswrapper[4842]: I0202 08:01:13.434085 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:13 crc kubenswrapper[4842]: E0202 08:01:13.435377 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:24 crc kubenswrapper[4842]: I0202 08:01:24.433408 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:24 crc kubenswrapper[4842]: E0202 08:01:24.434657 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:38 crc kubenswrapper[4842]: I0202 08:01:38.433474 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:38 crc kubenswrapper[4842]: E0202 08:01:38.434094 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:52 crc kubenswrapper[4842]: I0202 08:01:52.433623 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:53 crc kubenswrapper[4842]: I0202 08:01:53.297456 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0"}
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.459900 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:42 crc kubenswrapper[4842]: E0202 08:02:42.461504 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerName="collect-profiles"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.461527 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerName="collect-profiles"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.461773 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerName="collect-profiles"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.463242 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.480064 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.522903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.522995 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.523097 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.624277 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.624721 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.624795 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.625107 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.625108 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.664501 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
"MountVolume.SetUp succeeded for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk" Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.838577 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk" Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.323961 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"] Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.779129 4842 generic.go:334] "Generic (PLEG): container finished" podID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8" exitCode=0 Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.779361 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8"} Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.779439 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerStarted","Data":"f7a5d6b272b6c311a39a8e9ddc101fbb1653df24c1f041cb73a7bef8806bea46"} Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.781804 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:02:44 crc kubenswrapper[4842]: I0202 08:02:44.788298 4842 generic.go:334] "Generic (PLEG): container finished" podID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215" exitCode=0 Feb 02 08:02:44 crc kubenswrapper[4842]: I0202 08:02:44.788354 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215"} Feb 02 08:02:45 crc kubenswrapper[4842]: I0202 08:02:45.800376 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerStarted","Data":"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"} Feb 02 08:02:45 crc kubenswrapper[4842]: I0202 08:02:45.830873 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8tgjk" podStartSLOduration=2.432668937 podStartE2EDuration="3.830848179s" podCreationTimestamp="2026-02-02 08:02:42 +0000 UTC" firstStartedPulling="2026-02-02 08:02:43.781528657 +0000 UTC m=+4589.158796569" lastFinishedPulling="2026-02-02 08:02:45.179707869 +0000 UTC m=+4590.556975811" observedRunningTime="2026-02-02 08:02:45.823042195 +0000 UTC m=+4591.200310137" watchObservedRunningTime="2026-02-02 08:02:45.830848179 +0000 UTC m=+4591.208116121" Feb 02 08:02:52 crc kubenswrapper[4842]: I0202 08:02:52.839726 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8tgjk" Feb 02 08:02:52 crc kubenswrapper[4842]: I0202 08:02:52.842042 4842 kubelet.go:2542] "SyncLoop 
Feb 02 08:02:52 crc kubenswrapper[4842]: I0202 08:02:52.905543 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:53 crc kubenswrapper[4842]: I0202 08:02:53.931926 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:53 crc kubenswrapper[4842]: I0202 08:02:53.995313 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:55 crc kubenswrapper[4842]: I0202 08:02:55.882431 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8tgjk" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server" containerID="cri-o://b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858" gracePeriod=2
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.450442 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.538132 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") "
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.538287 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") "
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.538304 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") "
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.539442 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities" (OuterVolumeSpecName: "utilities") pod "bc53b20b-5fc0-438a-869d-7e76e878d5ee" (UID: "bc53b20b-5fc0-438a-869d-7e76e878d5ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.543582 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j" (OuterVolumeSpecName: "kube-api-access-hwr9j") pod "bc53b20b-5fc0-438a-869d-7e76e878d5ee" (UID: "bc53b20b-5fc0-438a-869d-7e76e878d5ee"). InnerVolumeSpecName "kube-api-access-hwr9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.592006 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc53b20b-5fc0-438a-869d-7e76e878d5ee" (UID: "bc53b20b-5fc0-438a-869d-7e76e878d5ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.639981 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.640016 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.640028 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") on node \"crc\" DevicePath \"\""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892766 4842 generic.go:334] "Generic (PLEG): container finished" podID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858" exitCode=0
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892808 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"}
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892842 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"f7a5d6b272b6c311a39a8e9ddc101fbb1653df24c1f041cb73a7bef8806bea46"}
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892861 4842 scope.go:117] "RemoveContainer" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892886 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.927078 4842 scope.go:117] "RemoveContainer" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215"
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.962880 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.962962 4842 scope.go:117] "RemoveContainer" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8"
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.977659 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.000748 4842 scope.go:117] "RemoveContainer" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"
Feb 02 08:02:57 crc kubenswrapper[4842]: E0202 08:02:57.001327 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858\": container with ID starting with b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858 not found: ID does not exist" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"
Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.001386 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"} err="failed to get container status \"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858\": rpc error: code = NotFound desc = could not find container \"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858\": container with ID starting with b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858 not found: ID does not exist"
Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.001425 4842 scope.go:117] "RemoveContainer" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215"
Feb 02 08:02:57 crc kubenswrapper[4842]: E0202 08:02:57.003595 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215\": container with ID starting with 632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215 not found: ID does not exist" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215"
Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.003641 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215"} err="failed to get container status \"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215\": rpc error: code = NotFound desc = could not find container \"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215\": container with ID starting with 632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215 not found: ID does not exist"
Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.003668 4842 scope.go:117] "RemoveContainer" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8"
Feb 02 08:02:57 crc kubenswrapper[4842]: E0202 08:02:57.003894 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8\": container with ID starting with 67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8 not found: ID does not exist" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8"
Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.003914 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8"} err="failed to get container status \"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8\": rpc error: code = NotFound desc = could not find container \"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8\": container with ID starting with 67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8 not found: ID does not exist"
Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.448065 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" path="/var/lib/kubelet/pods/bc53b20b-5fc0-438a-869d-7e76e878d5ee/volumes"
Feb 02 08:04:12 crc kubenswrapper[4842]: I0202 08:04:12.146245 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 08:04:12 crc kubenswrapper[4842]: I0202 08:04:12.146919 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 08:04:42 crc kubenswrapper[4842]: I0202 08:04:42.146658 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 08:04:42 crc kubenswrapper[4842]: I0202 08:04:42.147715 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.146318 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.146977 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.147039 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr"
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.147860 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.147958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0" gracePeriod=600
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.324559 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0" exitCode=0
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.324647 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0"}
Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.325056 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:05:13 crc kubenswrapper[4842]: I0202 08:05:13.344671 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"}
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.355887 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vr5fq"]
Feb 02 08:06:41 crc kubenswrapper[4842]: E0202 08:06:41.357046 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357069 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server"
Feb 02 08:06:41 crc kubenswrapper[4842]: E0202 08:06:41.357100 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-utilities"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357113 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-utilities"
Feb 02 08:06:41 crc kubenswrapper[4842]: E0202 08:06:41.357149 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-content"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357161 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-content"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357439 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.368996 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"]
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.369192 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.498495 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.498700 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.498786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600346 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600503 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600576 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600984 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.601544 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq"
\"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.641164 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.704211 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:42 crc kubenswrapper[4842]: I0202 08:06:42.274283 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"] Feb 02 08:06:43 crc kubenswrapper[4842]: I0202 08:06:43.241061 4842 generic.go:334] "Generic (PLEG): container finished" podID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6" exitCode=0 Feb 02 08:06:43 crc kubenswrapper[4842]: I0202 08:06:43.241151 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6"} Feb 02 08:06:43 crc kubenswrapper[4842]: I0202 08:06:43.241529 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerStarted","Data":"88515406a5d093a4bdc5e334b46779b8f1794b92de340f3e5ab6bb4d8b6cc1d7"} Feb 02 08:06:45 crc kubenswrapper[4842]: I0202 08:06:45.260274 4842 generic.go:334] "Generic (PLEG): container finished" podID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8" exitCode=0 Feb 02 08:06:45 crc kubenswrapper[4842]: I0202 08:06:45.260365 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"} Feb 02 08:06:46 crc kubenswrapper[4842]: I0202 08:06:46.272267 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerStarted","Data":"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"} Feb 02 08:06:46 crc kubenswrapper[4842]: I0202 08:06:46.302907 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vr5fq" podStartSLOduration=2.801329031 podStartE2EDuration="5.302882383s" podCreationTimestamp="2026-02-02 08:06:41 +0000 UTC" firstStartedPulling="2026-02-02 08:06:43.243570981 +0000 UTC m=+4828.620838933" lastFinishedPulling="2026-02-02 08:06:45.745124363 +0000 UTC m=+4831.122392285" observedRunningTime="2026-02-02 08:06:46.293853219 +0000 UTC m=+4831.671121191" watchObservedRunningTime="2026-02-02 08:06:46.302882383 +0000 UTC m=+4831.680150325" Feb 02 08:06:51 crc kubenswrapper[4842]: I0202 08:06:51.705118 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:51 crc kubenswrapper[4842]: I0202 
Feb 02 08:06:51 crc kubenswrapper[4842]: I0202 08:06:51.780126 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:52 crc kubenswrapper[4842]: I0202 08:06:52.403201 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:52 crc kubenswrapper[4842]: I0202 08:06:52.472100 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"]
Feb 02 08:06:54 crc kubenswrapper[4842]: I0202 08:06:54.344420 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vr5fq" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server" containerID="cri-o://084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e" gracePeriod=2
Feb 02 08:06:54 crc kubenswrapper[4842]: I0202 08:06:54.814199 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.013769 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") "
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.013880 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") "
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.013928 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") "
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.015600 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities" (OuterVolumeSpecName: "utilities") pod "0a97006d-5b38-4131-8ed8-fe834ec55b0c" (UID: "0a97006d-5b38-4131-8ed8-fe834ec55b0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.024391 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj" (OuterVolumeSpecName: "kube-api-access-tcbsj") pod "0a97006d-5b38-4131-8ed8-fe834ec55b0c" (UID: "0a97006d-5b38-4131-8ed8-fe834ec55b0c"). InnerVolumeSpecName "kube-api-access-tcbsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.116470 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.116523 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") on node \"crc\" DevicePath \"\""
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.161451 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a97006d-5b38-4131-8ed8-fe834ec55b0c" (UID: "0a97006d-5b38-4131-8ed8-fe834ec55b0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.217871 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.373933 4842 generic.go:334] "Generic (PLEG): container finished" podID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e" exitCode=0
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.373990 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"}
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.374034 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.374082 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"88515406a5d093a4bdc5e334b46779b8f1794b92de340f3e5ab6bb4d8b6cc1d7"}
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.374121 4842 scope.go:117] "RemoveContainer" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.409749 4842 scope.go:117] "RemoveContainer" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.417417 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"]
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.422302 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"]
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.439640 4842 scope.go:117] "RemoveContainer" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.443543 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" path="/var/lib/kubelet/pods/0a97006d-5b38-4131-8ed8-fe834ec55b0c/volumes"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.477188 4842 scope.go:117] "RemoveContainer" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"
Feb 02 08:06:55 crc kubenswrapper[4842]: E0202 08:06:55.478204 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e\": container with ID starting with 084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e not found: ID does not exist" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.478302 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"} err="failed to get container status \"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e\": rpc error: code = NotFound desc = could not find container \"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e\": container with ID starting with 084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e not found: ID does not exist"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.478346 4842 scope.go:117] "RemoveContainer" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"
Feb 02 08:06:55 crc kubenswrapper[4842]: E0202 08:06:55.478949 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": container with ID starting with 1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8 not found: ID does not exist" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"
Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.479000 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"} err="failed to get container status \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": rpc error: code = NotFound desc = could not find container \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": container with ID starting with 1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8 not found: ID does not exist"
containerID={"Type":"cri-o","ID":"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"} err="failed to get container status \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": rpc error: code = NotFound desc = could not find container \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": container with ID starting with 1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8 not found: ID does not exist" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.479033 4842 scope.go:117] "RemoveContainer" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6" Feb 02 08:06:55 crc kubenswrapper[4842]: E0202 08:06:55.479519 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6\": container with ID starting with 74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6 not found: ID does not exist" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.479566 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6"} err="failed to get container status \"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6\": rpc error: code = NotFound desc = could not find container \"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6\": container with ID starting with 74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6 not found: ID does not exist" Feb 02 08:07:12 crc kubenswrapper[4842]: I0202 08:07:12.147303 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:07:12 crc kubenswrapper[4842]: I0202 08:07:12.147919 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:07:42 crc kubenswrapper[4842]: I0202 08:07:42.145763 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:07:42 crc kubenswrapper[4842]: I0202 08:07:42.146439 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.146358 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 
Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.146987 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.147050 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr"
Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.147784 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.147880 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" gracePeriod=600
Feb 02 08:08:12 crc kubenswrapper[4842]: E0202 08:08:12.295775 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.079028 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" exitCode=0
Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.079107 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"}
Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.079616 4842 scope.go:117] "RemoveContainer" containerID="86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0"
Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.080950 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:08:13 crc kubenswrapper[4842]: E0202 08:08:13.081650 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:08:23 crc kubenswrapper[4842]: I0202 08:08:23.434062 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
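[Editor's note: the "back-off 5m0s" in the CrashLoopBackOff messages is the kubelet's restart back-off at its cap. Assuming the kubelet defaults (10s initial delay, doubled per restart, capped at 5m, reset after a period of healthy running), the schedule works out as in this sketch; this is my own arithmetic, not kubelet source.]

```go
package main

import (
	"fmt"
	"time"
)

// backoff returns the crash-loop restart delay after a given number of
// restarts, assuming the default 10s base, 2x factor, and 5m cap.
func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r))
	}
	// From restart 5 onward this prints 5m0s, matching "back-off 5m0s"
	// repeated in the entries above and below.
}
```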
Feb 02 08:08:23 crc kubenswrapper[4842]: E0202 08:08:23.435438 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:08:35 crc kubenswrapper[4842]: I0202 08:08:35.436456 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:08:35 crc kubenswrapper[4842]: E0202 08:08:35.437204 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:08:49 crc kubenswrapper[4842]: I0202 08:08:49.432989 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:08:49 crc kubenswrapper[4842]: E0202 08:08:49.433941 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:09:03 crc kubenswrapper[4842]: I0202 08:09:03.434905 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:09:03 crc kubenswrapper[4842]: E0202 08:09:03.436091 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.226621 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"]
Feb 02 08:09:16 crc kubenswrapper[4842]: E0202 08:09:16.228080 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-utilities"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228112 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-utilities"
Feb 02 08:09:16 crc kubenswrapper[4842]: E0202 08:09:16.228138 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-content"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228187 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-content"
Feb 02 08:09:16 crc kubenswrapper[4842]: E0202 08:09:16.228258 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228280 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228632 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.230889 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.281653 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"]
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.346431 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.346529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.346586 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.447431 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.447840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.447899 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.448253 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.448463 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.484518 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.573628 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.035757 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"]
Feb 02 08:09:17 crc kubenswrapper[4842]: W0202 08:09:17.038313 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41679c6_e0b1_4af3_9742_8e2a44d2c736.slice/crio-5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0 WatchSource:0}: Error finding container 5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0: Status 404 returned error can't find the container with id 5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0
Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.434357 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:09:17 crc kubenswrapper[4842]: E0202 08:09:17.434633 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.654006 4842 generic.go:334] "Generic (PLEG): container finished" podID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5" exitCode=0
Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.654097 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5"}
Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.654547 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerStarted","Data":"5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0"}
Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.656363 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 08:09:18 crc kubenswrapper[4842]: I0202 08:09:18.667203 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerStarted","Data":"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"}
Feb 02 08:09:19 crc kubenswrapper[4842]: I0202 08:09:19.674882 4842 generic.go:334] "Generic (PLEG): container finished" podID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a" exitCode=0
Feb 02 08:09:19 crc kubenswrapper[4842]: I0202 08:09:19.675107 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"}
Feb 02 08:09:20 crc kubenswrapper[4842]: I0202 08:09:20.718970 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8dcf7" podStartSLOduration=2.28792685 podStartE2EDuration="4.718936489s" podCreationTimestamp="2026-02-02 08:09:16 +0000 UTC" firstStartedPulling="2026-02-02 08:09:17.656023558 +0000 UTC m=+4983.033291480" lastFinishedPulling="2026-02-02 08:09:20.087033197 +0000 UTC m=+4985.464301119" observedRunningTime="2026-02-02 08:09:20.71777368 +0000 UTC m=+4986.095041622" watchObservedRunningTime="2026-02-02 08:09:20.718936489 +0000 UTC m=+4986.096204451"
Feb 02 08:09:21 crc kubenswrapper[4842]: I0202 08:09:21.697853 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerStarted","Data":"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"}
Feb 02 08:09:26 crc kubenswrapper[4842]: I0202 08:09:26.578060 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:26 crc kubenswrapper[4842]: I0202 08:09:26.578635 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:27 crc kubenswrapper[4842]: I0202 08:09:27.642591 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8dcf7" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server" probeResult="failure" output=<
Feb 02 08:09:27 crc kubenswrapper[4842]: 	timeout: failed to connect service ":50051" within 1s
Feb 02 08:09:27 crc kubenswrapper[4842]: >
Feb 02 08:09:29 crc kubenswrapper[4842]: I0202 08:09:29.434067 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:09:29 crc kubenswrapper[4842]: E0202 08:09:29.434936 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:09:36 crc kubenswrapper[4842]: I0202 08:09:36.655952 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8dcf7"
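[Editor's note: the pod_startup_latency_tracker entry above encodes a small calculation worth spelling out: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (08:09:20.718936489 − 08:09:16 = 4.718936489s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling − firstStartedPulling = 2.431009639s) from that, giving 2.28792685s. The same arithmetic, reproduced from the logged timestamps:]

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the numbers in the pod_startup_latency_tracker entry for
// redhat-operators-8dcf7, using the timestamps exactly as logged.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-02-02 08:09:16 +0000 UTC")
	running := parse("2026-02-02 08:09:20.718936489 +0000 UTC")
	pullStart := parse("2026-02-02 08:09:17.656023558 +0000 UTC")
	pullEnd := parse("2026-02-02 08:09:20.087033197 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // SLO duration excludes pull time

	fmt.Println(e2e) // 4.718936489s
	fmt.Println(slo) // 2.28792685s
}
```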
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:36 crc kubenswrapper[4842]: I0202 08:09:36.903823 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"] Feb 02 08:09:37 crc kubenswrapper[4842]: I0202 08:09:37.831089 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8dcf7" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server" containerID="cri-o://40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67" gracePeriod=2 Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.283210 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.387694 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.387758 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.387992 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.390030 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities" (OuterVolumeSpecName: "utilities") pod "e41679c6-e0b1-4af3-9742-8e2a44d2c736" (UID: "e41679c6-e0b1-4af3-9742-8e2a44d2c736"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.394251 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk" (OuterVolumeSpecName: "kube-api-access-lxdpk") pod "e41679c6-e0b1-4af3-9742-8e2a44d2c736" (UID: "e41679c6-e0b1-4af3-9742-8e2a44d2c736"). InnerVolumeSpecName "kube-api-access-lxdpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.494150 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") on node \"crc\" DevicePath \"\"" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.494204 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.574937 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e41679c6-e0b1-4af3-9742-8e2a44d2c736" (UID: "e41679c6-e0b1-4af3-9742-8e2a44d2c736"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.595948 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843205 4842 generic.go:334] "Generic (PLEG): container finished" podID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67" exitCode=0 Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843280 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"} Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843306 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0"} Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843386 4842 scope.go:117] "RemoveContainer" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.881089 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"] Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.881426 4842 scope.go:117] "RemoveContainer" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.888949 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"] Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.913347 4842 scope.go:117] "RemoveContainer" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.939364 4842 scope.go:117] "RemoveContainer" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67" Feb 02 08:09:38 crc kubenswrapper[4842]: E0202 08:09:38.940861 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67\": container with ID starting with 40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67 not found: ID does not exist" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.940927 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"} err="failed to get container status \"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67\": rpc error: code = NotFound desc = could not find container \"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67\": container with ID starting with 40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67 not found: ID does not exist" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.940971 4842 scope.go:117] "RemoveContainer" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a" Feb 02 08:09:38 crc kubenswrapper[4842]: E0202 08:09:38.941576 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a\": container with ID starting with d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a not found: ID does not exist" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.941635 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"} err="failed to get container status \"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a\": rpc error: code = NotFound desc = could not find container 
\"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a\": container with ID starting with d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a not found: ID does not exist" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.941676 4842 scope.go:117] "RemoveContainer" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5" Feb 02 08:09:38 crc kubenswrapper[4842]: E0202 08:09:38.942126 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5\": container with ID starting with 86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5 not found: ID does not exist" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.942171 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5"} err="failed to get container status \"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5\": rpc error: code = NotFound desc = could not find container \"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5\": container with ID starting with 86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5 not found: ID does not exist" Feb 02 08:09:39 crc kubenswrapper[4842]: I0202 08:09:39.448533 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" path="/var/lib/kubelet/pods/e41679c6-e0b1-4af3-9742-8e2a44d2c736/volumes" Feb 02 08:09:40 crc kubenswrapper[4842]: I0202 08:09:40.433585 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:09:40 crc kubenswrapper[4842]: E0202 08:09:40.433813 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:09:54 crc kubenswrapper[4842]: I0202 08:09:54.434345 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:09:54 crc kubenswrapper[4842]: E0202 08:09:54.435436 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:10:09 crc kubenswrapper[4842]: I0202 08:10:09.434091 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:10:09 crc kubenswrapper[4842]: E0202 08:10:09.434884 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:10:23 crc kubenswrapper[4842]: I0202 08:10:23.434645 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:10:23 crc kubenswrapper[4842]: E0202 08:10:23.435618 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:10:34 crc kubenswrapper[4842]: I0202 08:10:34.434350 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:10:34 crc kubenswrapper[4842]: E0202 08:10:34.435465 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:10:47 crc kubenswrapper[4842]: I0202 08:10:47.433526 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:10:47 crc kubenswrapper[4842]: E0202 08:10:47.434447 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:10:58 crc kubenswrapper[4842]: I0202 08:10:58.433850 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:10:58 crc kubenswrapper[4842]: E0202 08:10:58.435101 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:11:12 crc kubenswrapper[4842]: I0202 08:11:12.433386 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:11:12 crc kubenswrapper[4842]: E0202 08:11:12.436478 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" 
podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:11:25 crc kubenswrapper[4842]: I0202 08:11:25.436980 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:11:25 crc kubenswrapper[4842]: E0202 08:11:25.437669 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:11:40 crc kubenswrapper[4842]: I0202 08:11:40.434347 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:11:40 crc kubenswrapper[4842]: E0202 08:11:40.436426 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:11:51 crc kubenswrapper[4842]: I0202 08:11:51.433897 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:11:51 crc kubenswrapper[4842]: E0202 08:11:51.434897 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:04 crc kubenswrapper[4842]: I0202 08:12:04.434541 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:12:04 crc kubenswrapper[4842]: E0202 08:12:04.435691 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.452844 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"] Feb 02 08:12:18 crc kubenswrapper[4842]: E0202 08:12:18.453900 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server" Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.453921 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server" Feb 02 08:12:18 crc kubenswrapper[4842]: E0202 08:12:18.453953 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-utilities" Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.453965 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-utilities"
Feb 02 08:12:18 crc kubenswrapper[4842]: E0202 08:12:18.453994 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-content"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.454007 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-content"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.454281 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.455973 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.473176 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"]
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.596211 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.596465 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.596535 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.697786 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698088 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698119 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698717 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698976 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.721467 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.781744 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.039989 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"]
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.407949 4842 generic.go:334] "Generic (PLEG): container finished" podID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" exitCode=0
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.407987 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e"}
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.408010 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerStarted","Data":"7a8b719038fc8601b3c12eed556bae843f74499f666b80fc5215969cd88e23aa"}
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.439892 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:12:19 crc kubenswrapper[4842]: E0202 08:12:19.440250 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:12:20 crc kubenswrapper[4842]: I0202 08:12:20.423051 4842 generic.go:334] "Generic (PLEG): container finished" podID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" exitCode=0
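[Editor's note: each of these marketplace catalog pods mounts the same three volumes seen in the VerifyControllerAttachedVolume/MountVolume entries: two emptyDirs (utilities, catalog-content) plus a kube-api-access-* projected service-account token that is injected automatically rather than declared in the pod. As a sketch of the pod-authored half; the actual CatalogSource pod template is not part of this log:]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// catalogVolumes lists the two volumes the catalog pod declares itself.
// The third volume in the log (kube-api-access-x9vnt) is the projected
// service-account token added automatically, so it is not listed here.
func catalogVolumes() []corev1.Volume {
	return []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	}
}

func main() {
	for _, v := range catalogVolumes() {
		fmt.Println(v.Name)
	}
}
```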
event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a"} Feb 02 08:12:21 crc kubenswrapper[4842]: I0202 08:12:21.447975 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerStarted","Data":"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5"} Feb 02 08:12:21 crc kubenswrapper[4842]: I0202 08:12:21.457556 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jdh6g" podStartSLOduration=2.021180834 podStartE2EDuration="3.457536613s" podCreationTimestamp="2026-02-02 08:12:18 +0000 UTC" firstStartedPulling="2026-02-02 08:12:19.409592666 +0000 UTC m=+5164.786860578" lastFinishedPulling="2026-02-02 08:12:20.845948405 +0000 UTC m=+5166.223216357" observedRunningTime="2026-02-02 08:12:21.455735748 +0000 UTC m=+5166.833003680" watchObservedRunningTime="2026-02-02 08:12:21.457536613 +0000 UTC m=+5166.834804535" Feb 02 08:12:28 crc kubenswrapper[4842]: I0202 08:12:28.782723 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:28 crc kubenswrapper[4842]: I0202 08:12:28.783169 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:28 crc kubenswrapper[4842]: I0202 08:12:28.854272 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:29 crc kubenswrapper[4842]: I0202 08:12:29.584153 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:29 crc kubenswrapper[4842]: I0202 08:12:29.648088 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"] Feb 02 08:12:31 crc kubenswrapper[4842]: I0202 08:12:31.536461 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jdh6g" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" containerID="cri-o://85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" gracePeriod=2 Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.051351 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.108865 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"08fe0e32-0a1d-4dee-8242-5f813885ae92\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.109084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"08fe0e32-0a1d-4dee-8242-5f813885ae92\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.109107 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"08fe0e32-0a1d-4dee-8242-5f813885ae92\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.110555 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities" (OuterVolumeSpecName: "utilities") pod "08fe0e32-0a1d-4dee-8242-5f813885ae92" (UID: "08fe0e32-0a1d-4dee-8242-5f813885ae92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.122526 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt" (OuterVolumeSpecName: "kube-api-access-x9vnt") pod "08fe0e32-0a1d-4dee-8242-5f813885ae92" (UID: "08fe0e32-0a1d-4dee-8242-5f813885ae92"). InnerVolumeSpecName "kube-api-access-x9vnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.141417 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08fe0e32-0a1d-4dee-8242-5f813885ae92" (UID: "08fe0e32-0a1d-4dee-8242-5f813885ae92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.212019 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") on node \"crc\" DevicePath \"\"" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.212054 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.212069 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.434458 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.434871 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548453 4842 generic.go:334] "Generic (PLEG): container finished" podID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" exitCode=0 Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548503 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5"} Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548581 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"7a8b719038fc8601b3c12eed556bae843f74499f666b80fc5215969cd88e23aa"} Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548611 4842 scope.go:117] "RemoveContainer" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.549342 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.593157 4842 scope.go:117] "RemoveContainer" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.617280 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"] Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.626160 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"] Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.647064 4842 scope.go:117] "RemoveContainer" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.679270 4842 scope.go:117] "RemoveContainer" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.679843 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5\": container with ID starting with 85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5 not found: ID does not exist" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.679971 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5"} err="failed to get container status \"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5\": rpc error: code = NotFound desc = could not find container \"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5\": container with ID starting with 85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5 not found: ID does not exist" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.680055 4842 scope.go:117] "RemoveContainer" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.680582 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a\": container with ID starting with 77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a not found: ID does not exist" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.680624 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a"} err="failed to get container status \"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a\": rpc error: code = NotFound desc = could not find container \"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a\": container with ID starting with 77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a not found: ID does not exist" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.680650 4842 scope.go:117] "RemoveContainer" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.681125 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e\": container with ID starting with d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e not found: ID does not exist" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.681159 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e"} err="failed to get container status \"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e\": rpc error: code = NotFound desc = could not find container \"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e\": container with ID starting with d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e not found: ID does not exist" Feb 02 08:12:33 crc kubenswrapper[4842]: I0202 08:12:33.446072 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" path="/var/lib/kubelet/pods/08fe0e32-0a1d-4dee-8242-5f813885ae92/volumes" Feb 02 08:12:44 crc kubenswrapper[4842]: I0202 08:12:44.433569 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:12:44 crc kubenswrapper[4842]: E0202 08:12:44.435327 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.015183 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:12:57 crc kubenswrapper[4842]: E0202 08:12:57.016283 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-utilities" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016307 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-utilities" Feb 02 08:12:57 crc kubenswrapper[4842]: E0202 08:12:57.016328 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016341 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" Feb 02 08:12:57 crc kubenswrapper[4842]: E0202 08:12:57.016388 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-content" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016403 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-content" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016656 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.018425 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.050764 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.100243 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.100740 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.101011 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.202386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.202451 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.202479 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.203317 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.203758 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.229188 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.341500 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.844281 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.434411 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:12:58 crc kubenswrapper[4842]: E0202 08:12:58.435123 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.762757 4842 generic.go:334] "Generic (PLEG): container finished" podID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" exitCode=0 Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.762801 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7"} Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.762844 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerStarted","Data":"e91d24bafb4ba23512441976977d4c1e9d4f0c5bb10c4601f8ae397a869e1aac"} Feb 02 08:12:59 crc kubenswrapper[4842]: I0202 08:12:59.776333 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerStarted","Data":"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e"} Feb 02 08:13:00 crc kubenswrapper[4842]: I0202 08:13:00.787904 4842 generic.go:334] "Generic (PLEG): container finished" podID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" exitCode=0 Feb 02 08:13:00 crc kubenswrapper[4842]: I0202 08:13:00.787985 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e"} Feb 02 08:13:01 crc kubenswrapper[4842]: I0202 08:13:01.800682 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerStarted","Data":"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7"} Feb 02 08:13:01 crc kubenswrapper[4842]: I0202 08:13:01.823023 4842 
Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.341700 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jnhtv"
Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.342541 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jnhtv"
Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.407892 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jnhtv"
Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.923595 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jnhtv"
Feb 02 08:13:08 crc kubenswrapper[4842]: I0202 08:13:08.042009 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"]
Feb 02 08:13:09 crc kubenswrapper[4842]: I0202 08:13:09.871495 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jnhtv" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" containerID="cri-o://f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" gracePeriod=2
Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.428388 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv"
Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.558987 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") "
Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.559107 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") "
Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.559372 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") "
Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.560088 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities" (OuterVolumeSpecName: "utilities") pod "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" (UID: "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.561009 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.568363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm" (OuterVolumeSpecName: "kube-api-access-hb4vm") pod "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" (UID: "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e"). InnerVolumeSpecName "kube-api-access-hb4vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.630668 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" (UID: "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.661986 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.662018 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") on node \"crc\" DevicePath \"\"" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878349 4842 generic.go:334] "Generic (PLEG): container finished" podID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" exitCode=0 Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878390 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7"} Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878415 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"e91d24bafb4ba23512441976977d4c1e9d4f0c5bb10c4601f8ae397a869e1aac"} Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878431 4842 scope.go:117] "RemoveContainer" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878451 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.904888 4842 scope.go:117] "RemoveContainer" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.915461 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.942845 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.945386 4842 scope.go:117] "RemoveContainer" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.962625 4842 scope.go:117] "RemoveContainer" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" Feb 02 08:13:10 crc kubenswrapper[4842]: E0202 08:13:10.963075 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7\": container with ID starting with f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7 not found: ID does not exist" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963118 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7"} err="failed to get container status \"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7\": rpc error: code = NotFound desc = could not find container \"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7\": container with ID starting with f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7 not found: ID does not exist" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963145 4842 scope.go:117] "RemoveContainer" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" Feb 02 08:13:10 crc kubenswrapper[4842]: E0202 08:13:10.963618 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e\": container with ID starting with f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e not found: ID does not exist" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963670 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e"} err="failed to get container status \"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e\": rpc error: code = NotFound desc = could not find container \"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e\": container with ID starting with f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e not found: ID does not exist" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963702 4842 scope.go:117] "RemoveContainer" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" Feb 02 08:13:10 crc kubenswrapper[4842]: E0202 08:13:10.964015 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7\": container with ID starting with 5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7 not found: ID does not exist" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.964040 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7"} err="failed to get container status \"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7\": rpc error: code = NotFound desc = could not find container \"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7\": container with ID starting with 5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7 not found: ID does not exist" Feb 02 08:13:11 crc kubenswrapper[4842]: I0202 08:13:11.452347 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" path="/var/lib/kubelet/pods/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e/volumes" Feb 02 08:13:12 crc kubenswrapper[4842]: I0202 08:13:12.434101 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:13:12 crc kubenswrapper[4842]: I0202 08:13:12.895115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92"} Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.164867 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4"] Feb 02 08:15:00 crc kubenswrapper[4842]: E0202 08:15:00.165966 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-utilities" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.165990 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-utilities" Feb 02 08:15:00 crc kubenswrapper[4842]: E0202 08:15:00.166026 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-content" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.166043 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-content" Feb 02 08:15:00 crc kubenswrapper[4842]: E0202 08:15:00.166080 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.166097 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.166422 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.167155 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.171405 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.171429 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.178909 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4"] Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.331564 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.331671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.331726 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.433606 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.433717 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.433788 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.434971 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod 
\"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.444867 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.471949 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.493461 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.983237 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4"] Feb 02 08:15:01 crc kubenswrapper[4842]: I0202 08:15:01.936010 4842 generic.go:334] "Generic (PLEG): container finished" podID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerID="745375a6b9c71aa829798c0d75b999195e39d9afe60926fdd96735a190433847" exitCode=0 Feb 02 08:15:01 crc kubenswrapper[4842]: I0202 08:15:01.936070 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" event={"ID":"23e7ebd9-93a3-45db-8cff-07ae373b0879","Type":"ContainerDied","Data":"745375a6b9c71aa829798c0d75b999195e39d9afe60926fdd96735a190433847"} Feb 02 08:15:01 crc kubenswrapper[4842]: I0202 08:15:01.936344 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" event={"ID":"23e7ebd9-93a3-45db-8cff-07ae373b0879","Type":"ContainerStarted","Data":"cb919aeb619658ee7321a50a61278ab5a442098a4e9675114f6becb8ddc68c30"} Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.293174 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.390888 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod \"23e7ebd9-93a3-45db-8cff-07ae373b0879\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.390958 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"23e7ebd9-93a3-45db-8cff-07ae373b0879\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.391150 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"23e7ebd9-93a3-45db-8cff-07ae373b0879\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.391959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume" (OuterVolumeSpecName: "config-volume") pod "23e7ebd9-93a3-45db-8cff-07ae373b0879" (UID: "23e7ebd9-93a3-45db-8cff-07ae373b0879"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.399803 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm" (OuterVolumeSpecName: "kube-api-access-66qfm") pod "23e7ebd9-93a3-45db-8cff-07ae373b0879" (UID: "23e7ebd9-93a3-45db-8cff-07ae373b0879"). InnerVolumeSpecName "kube-api-access-66qfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.403564 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "23e7ebd9-93a3-45db-8cff-07ae373b0879" (UID: "23e7ebd9-93a3-45db-8cff-07ae373b0879"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.493171 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.493203 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.493237 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") on node \"crc\" DevicePath \"\"" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.955443 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" event={"ID":"23e7ebd9-93a3-45db-8cff-07ae373b0879","Type":"ContainerDied","Data":"cb919aeb619658ee7321a50a61278ab5a442098a4e9675114f6becb8ddc68c30"} Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.955498 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb919aeb619658ee7321a50a61278ab5a442098a4e9675114f6becb8ddc68c30" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.955601 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:04 crc kubenswrapper[4842]: I0202 08:15:04.386860 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"] Feb 02 08:15:04 crc kubenswrapper[4842]: I0202 08:15:04.393571 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"] Feb 02 08:15:05 crc kubenswrapper[4842]: I0202 08:15:05.451915 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" path="/var/lib/kubelet/pods/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe/volumes" Feb 02 08:15:12 crc kubenswrapper[4842]: I0202 08:15:12.145826 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:15:12 crc kubenswrapper[4842]: I0202 08:15:12.146168 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:15:22 crc kubenswrapper[4842]: I0202 08:15:22.819025 4842 scope.go:117] "RemoveContainer" containerID="8dbf1ff40ae24c1cb278330205be0fe8707c50279bf4f5b00c195cfdd226a43f" Feb 02 08:15:42 crc kubenswrapper[4842]: I0202 08:15:42.146673 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 02 08:15:42 crc kubenswrapper[4842]: I0202 08:15:42.147449 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.146261 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.147210 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.147322 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr"
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.148150 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.148285 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92" gracePeriod=600
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.594661 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92" exitCode=0
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.594725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92"}
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.595139 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"}
Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.595179 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.047975 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9h4qp"]
Feb 02 08:16:58 crc kubenswrapper[4842]: E0202 08:16:58.048902 4842 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerName="collect-profiles" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.048919 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerName="collect-profiles" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.049109 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerName="collect-profiles" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.050362 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.065817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.177562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.177688 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.177731 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.279334 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.279434 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.279478 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.280442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.280533 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.298548 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.372784 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.630107 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.165096 4842 generic.go:334] "Generic (PLEG): container finished" podID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" exitCode=0 Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.165195 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38"} Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.165600 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerStarted","Data":"f5e8a7529c8fe9e2eab927e7f8e49d3c717cd786f8001ac00969587b6bc359fb"} Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.167390 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:17:00 crc kubenswrapper[4842]: I0202 08:17:00.178632 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerStarted","Data":"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e"} Feb 02 08:17:01 crc kubenswrapper[4842]: I0202 08:17:01.191329 4842 generic.go:334] "Generic (PLEG): container finished" podID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" exitCode=0 Feb 02 08:17:01 crc kubenswrapper[4842]: I0202 08:17:01.191415 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e"} Feb 02 08:17:03 crc kubenswrapper[4842]: I0202 08:17:03.219760 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" 
event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerStarted","Data":"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4"} Feb 02 08:17:03 crc kubenswrapper[4842]: I0202 08:17:03.252298 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9h4qp" podStartSLOduration=2.582693617 podStartE2EDuration="5.252272985s" podCreationTimestamp="2026-02-02 08:16:58 +0000 UTC" firstStartedPulling="2026-02-02 08:16:59.166987694 +0000 UTC m=+5444.544255636" lastFinishedPulling="2026-02-02 08:17:01.836567062 +0000 UTC m=+5447.213835004" observedRunningTime="2026-02-02 08:17:03.247588999 +0000 UTC m=+5448.624856921" watchObservedRunningTime="2026-02-02 08:17:03.252272985 +0000 UTC m=+5448.629540907" Feb 02 08:17:08 crc kubenswrapper[4842]: I0202 08:17:08.372815 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:08 crc kubenswrapper[4842]: I0202 08:17:08.373359 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:08 crc kubenswrapper[4842]: I0202 08:17:08.423710 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:09 crc kubenswrapper[4842]: I0202 08:17:09.414192 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:09 crc kubenswrapper[4842]: I0202 08:17:09.509459 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.289880 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9h4qp" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" containerID="cri-o://5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" gracePeriod=2 Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.745574 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.895767 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"7082cb1f-29f3-4652-9b74-94e76fb391ed\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.895870 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"7082cb1f-29f3-4652-9b74-94e76fb391ed\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.895899 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"7082cb1f-29f3-4652-9b74-94e76fb391ed\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.897086 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities" (OuterVolumeSpecName: "utilities") pod "7082cb1f-29f3-4652-9b74-94e76fb391ed" (UID: "7082cb1f-29f3-4652-9b74-94e76fb391ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.915167 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt" (OuterVolumeSpecName: "kube-api-access-xsmzt") pod "7082cb1f-29f3-4652-9b74-94e76fb391ed" (UID: "7082cb1f-29f3-4652-9b74-94e76fb391ed"). InnerVolumeSpecName "kube-api-access-xsmzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.955587 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7082cb1f-29f3-4652-9b74-94e76fb391ed" (UID: "7082cb1f-29f3-4652-9b74-94e76fb391ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.997066 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") on node \"crc\" DevicePath \"\"" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.997098 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.997112 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304494 4842 generic.go:334] "Generic (PLEG): container finished" podID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" exitCode=0 Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304580 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4"} Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304661 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"f5e8a7529c8fe9e2eab927e7f8e49d3c717cd786f8001ac00969587b6bc359fb"} Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304665 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304709 4842 scope.go:117] "RemoveContainer" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.348364 4842 scope.go:117] "RemoveContainer" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.371672 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.378837 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.395838 4842 scope.go:117] "RemoveContainer" containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.426082 4842 scope.go:117] "RemoveContainer" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" Feb 02 08:17:12 crc kubenswrapper[4842]: E0202 08:17:12.426738 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4\": container with ID starting with 5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4 not found: ID does not exist" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.426803 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4"} err="failed to get container status \"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4\": rpc error: code = NotFound desc = could not find container \"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4\": container with ID starting with 5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4 not found: ID does not exist" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.426838 4842 scope.go:117] "RemoveContainer" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" Feb 02 08:17:12 crc kubenswrapper[4842]: E0202 08:17:12.427484 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e\": container with ID starting with 7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e not found: ID does not exist" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.427524 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e"} err="failed to get container status \"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e\": rpc error: code = NotFound desc = could not find container \"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e\": container with ID starting with 7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e not found: ID does not exist" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.427549 4842 scope.go:117] "RemoveContainer" 
containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" Feb 02 08:17:12 crc kubenswrapper[4842]: E0202 08:17:12.428332 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38\": container with ID starting with a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38 not found: ID does not exist" containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.428378 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38"} err="failed to get container status \"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38\": rpc error: code = NotFound desc = could not find container \"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38\": container with ID starting with a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38 not found: ID does not exist" Feb 02 08:17:13 crc kubenswrapper[4842]: I0202 08:17:13.452462 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" path="/var/lib/kubelet/pods/7082cb1f-29f3-4652-9b74-94e76fb391ed/volumes" Feb 02 08:18:12 crc kubenswrapper[4842]: I0202 08:18:12.146014 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:18:12 crc kubenswrapper[4842]: I0202 08:18:12.146690 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:18:42 crc kubenswrapper[4842]: I0202 08:18:42.146000 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:18:42 crc kubenswrapper[4842]: I0202 08:18:42.146656 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.145653 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146164 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146209 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146844 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146910 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" gracePeriod=600 Feb 02 08:19:12 crc kubenswrapper[4842]: E0202 08:19:12.286707 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.732418 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" exitCode=0 Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.732483 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"} Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.732532 4842 scope.go:117] "RemoveContainer" containerID="6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.733453 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:19:12 crc kubenswrapper[4842]: E0202 08:19:12.733975 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:27 crc kubenswrapper[4842]: I0202 08:19:27.433693 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:19:27 crc kubenswrapper[4842]: E0202 08:19:27.434807 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.827643 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:31 crc kubenswrapper[4842]: E0202 08:19:31.829771 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-utilities" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.829810 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-utilities" Feb 02 08:19:31 crc kubenswrapper[4842]: E0202 08:19:31.829864 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-content" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.829883 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-content" Feb 02 08:19:31 crc kubenswrapper[4842]: E0202 08:19:31.829934 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.829955 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.830347 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.832647 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.845251 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.885145 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.885538 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.885746 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986414 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986483 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986523 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986977 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.987078 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.017671 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.200797 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.651853 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.907653 4842 generic.go:334] "Generic (PLEG): container finished" podID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" exitCode=0 Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.907765 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370"} Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.907971 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerStarted","Data":"17d09c4717c193ff0f39559deea84c2b67b4b56124ea4abb01b5757dc66fa47f"} Feb 02 08:19:33 crc kubenswrapper[4842]: I0202 08:19:33.920286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerStarted","Data":"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f"} Feb 02 08:19:34 crc kubenswrapper[4842]: I0202 08:19:34.931448 4842 generic.go:334] "Generic (PLEG): container finished" podID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" exitCode=0 Feb 02 08:19:34 crc kubenswrapper[4842]: I0202 08:19:34.931558 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f"} Feb 02 08:19:35 crc kubenswrapper[4842]: I0202 08:19:35.945678 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerStarted","Data":"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1"} Feb 02 08:19:35 crc kubenswrapper[4842]: I0202 08:19:35.980685 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4pbx" podStartSLOduration=2.55347175 podStartE2EDuration="4.980665026s" podCreationTimestamp="2026-02-02 08:19:31 +0000 UTC" firstStartedPulling="2026-02-02 08:19:32.909107122 +0000 UTC m=+5598.286375034" lastFinishedPulling="2026-02-02 08:19:35.336300358 +0000 UTC m=+5600.713568310" observedRunningTime="2026-02-02 08:19:35.976441831 +0000 UTC m=+5601.353709823" watchObservedRunningTime="2026-02-02 08:19:35.980665026 +0000 UTC m=+5601.357932948" Feb 02 08:19:38 crc kubenswrapper[4842]: I0202 08:19:38.433490 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 
Feb 02 08:19:38 crc kubenswrapper[4842]: E0202 08:19:38.434436 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:42 crc kubenswrapper[4842]: I0202 08:19:42.201764 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:42 crc kubenswrapper[4842]: I0202 08:19:42.201884 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:43 crc kubenswrapper[4842]: I0202 08:19:43.267655 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4pbx" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" probeResult="failure" output=< Feb 02 08:19:43 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 08:19:43 crc kubenswrapper[4842]: > Feb 02 08:19:50 crc kubenswrapper[4842]: I0202 08:19:50.434288 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:19:50 crc kubenswrapper[4842]: E0202 08:19:50.435000 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:52 crc kubenswrapper[4842]: I0202 08:19:52.268729 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:52 crc kubenswrapper[4842]: I0202 08:19:52.328923 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:52 crc kubenswrapper[4842]: I0202 08:19:52.523445 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.091923 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4pbx" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" containerID="cri-o://d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" gracePeriod=2 Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.584746 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.918779 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.918858 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.918897 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.920121 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities" (OuterVolumeSpecName: "utilities") pod "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" (UID: "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.927012 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr" (OuterVolumeSpecName: "kube-api-access-mlqhr") pod "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" (UID: "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7"). InnerVolumeSpecName "kube-api-access-mlqhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.020696 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") on node \"crc\" DevicePath \"\"" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.020735 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.098637 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" (UID: "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.100967 4842 generic.go:334] "Generic (PLEG): container finished" podID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" exitCode=0 Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101030 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1"} Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101046 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101065 4842 scope.go:117] "RemoveContainer" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101056 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"17d09c4717c193ff0f39559deea84c2b67b4b56124ea4abb01b5757dc66fa47f"} Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.121707 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.135543 4842 scope.go:117] "RemoveContainer" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.156522 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.161465 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.183876 4842 scope.go:117] "RemoveContainer" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.209355 4842 scope.go:117] "RemoveContainer" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" Feb 02 08:19:55 crc kubenswrapper[4842]: E0202 08:19:55.209943 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1\": container with ID starting with d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1 not found: ID does not exist" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.210001 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1"} err="failed to get container status \"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1\": rpc error: code = NotFound desc = could not find container \"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1\": container with ID starting with d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1 not found: ID does not exist" Feb 02 08:19:55 crc 
kubenswrapper[4842]: I0202 08:19:55.210035 4842 scope.go:117] "RemoveContainer" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" Feb 02 08:19:55 crc kubenswrapper[4842]: E0202 08:19:55.210504 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f\": container with ID starting with 056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f not found: ID does not exist" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.210543 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f"} err="failed to get container status \"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f\": rpc error: code = NotFound desc = could not find container \"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f\": container with ID starting with 056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f not found: ID does not exist" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.210570 4842 scope.go:117] "RemoveContainer" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" Feb 02 08:19:55 crc kubenswrapper[4842]: E0202 08:19:55.211075 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370\": container with ID starting with 9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370 not found: ID does not exist" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.211141 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370"} err="failed to get container status \"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370\": rpc error: code = NotFound desc = could not find container \"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370\": container with ID starting with 9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370 not found: ID does not exist" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.450030 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" path="/var/lib/kubelet/pods/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7/volumes" Feb 02 08:20:03 crc kubenswrapper[4842]: I0202 08:20:03.433730 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:03 crc kubenswrapper[4842]: E0202 08:20:03.434875 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:16 crc kubenswrapper[4842]: I0202 08:20:16.434076 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" 
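
The machine-config-daemon entries that bracket this stretch of the journal are a textbook CrashLoopBackOff cycle: roughly every 10 to 25 seconds the sync loop proposes the restart ("RemoveContainer" for the dead container 61b64793...), and pod_workers rejects it while the pod is still inside its back-off window. The "back-off 5m0s" in every rejection is the ceiling: kubelet's restart delay doubles per crash from a small initial value until it saturates at five minutes, which is why the identical error repeats below for minutes on end. The sketch shows the capped-doubling shape using kubelet's documented 10s initial / 5m cap defaults; it is illustrative, not kubelet's own backoff code.

    package main

    import (
            "fmt"
            "time"
    )

    // crashLoopDelay is a hypothetical helper: the delay before restart
    // attempt n, doubling from initial and saturating at max.
    func crashLoopDelay(n int, initial, max time.Duration) time.Duration {
            d := initial
            for i := 1; i < n; i++ {
                    d *= 2
                    if d >= max {
                            return max
                    }
            }
            return d
    }

    func main() {
            for n := 1; n <= 7; n++ {
                    fmt.Printf("crash %d -> wait %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
            }
            // From the 6th crash onward this prints 5m0s: the "back-off 5m0s" above.
    }
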
Feb 02 08:20:16 crc kubenswrapper[4842]: E0202 08:20:16.435256 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:29 crc kubenswrapper[4842]: I0202 08:20:29.433793 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:29 crc kubenswrapper[4842]: E0202 08:20:29.434789 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:43 crc kubenswrapper[4842]: I0202 08:20:43.434011 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:43 crc kubenswrapper[4842]: E0202 08:20:43.434751 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:57 crc kubenswrapper[4842]: I0202 08:20:57.433797 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:57 crc kubenswrapper[4842]: E0202 08:20:57.434775 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:12 crc kubenswrapper[4842]: I0202 08:21:12.433635 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:12 crc kubenswrapper[4842]: E0202 08:21:12.434772 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:23 crc kubenswrapper[4842]: I0202 08:21:23.434425 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:23 crc kubenswrapper[4842]: E0202 08:21:23.435489 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:35 crc kubenswrapper[4842]: I0202 08:21:35.441081 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:35 crc kubenswrapper[4842]: E0202 08:21:35.442335 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:50 crc kubenswrapper[4842]: I0202 08:21:50.434305 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:50 crc kubenswrapper[4842]: E0202 08:21:50.437923 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:02 crc kubenswrapper[4842]: I0202 08:22:02.433874 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:02 crc kubenswrapper[4842]: E0202 08:22:02.435169 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:14 crc kubenswrapper[4842]: I0202 08:22:14.433626 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:14 crc kubenswrapper[4842]: E0202 08:22:14.434425 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:29 crc kubenswrapper[4842]: I0202 08:22:29.435299 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:29 crc kubenswrapper[4842]: E0202 08:22:29.436258 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:40 crc kubenswrapper[4842]: I0202 08:22:40.433859 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:40 crc kubenswrapper[4842]: E0202 08:22:40.434680 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:54 crc kubenswrapper[4842]: I0202 08:22:54.434176 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:54 crc kubenswrapper[4842]: E0202 08:22:54.435051 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:09 crc kubenswrapper[4842]: I0202 08:23:09.434095 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:23:09 crc kubenswrapper[4842]: E0202 08:23:09.435010 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.775494 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:10 crc kubenswrapper[4842]: E0202 08:23:10.776702 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-utilities" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.776736 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-utilities" Feb 02 08:23:10 crc kubenswrapper[4842]: E0202 08:23:10.776802 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.776821 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" Feb 02 08:23:10 crc kubenswrapper[4842]: E0202 08:23:10.776856 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-content" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.776875 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-content" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 
08:23:10.777318 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.779873 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.793618 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.965478 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.965563 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.965695 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.066894 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.067001 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.067292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.067961 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.068129 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.087702 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.136207 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.628002 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:11 crc kubenswrapper[4842]: W0202 08:23:11.635437 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20ef3d7_41e2_462b_b3d1_3cc95f3463c6.slice/crio-1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525 WatchSource:0}: Error finding container 1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525: Status 404 returned error can't find the container with id 1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525 Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.837013 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerStarted","Data":"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4"} Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.838549 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerStarted","Data":"1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525"} Feb 02 08:23:12 crc kubenswrapper[4842]: I0202 08:23:12.845870 4842 generic.go:334] "Generic (PLEG): container finished" podID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" exitCode=0 Feb 02 08:23:12 crc kubenswrapper[4842]: I0202 08:23:12.847412 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:23:12 crc kubenswrapper[4842]: I0202 08:23:12.845922 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4"} Feb 02 08:23:13 crc kubenswrapper[4842]: I0202 08:23:13.858548 4842 generic.go:334] "Generic (PLEG): container finished" podID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" exitCode=0 Feb 02 08:23:13 crc kubenswrapper[4842]: I0202 08:23:13.858641 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69"} Feb 02 08:23:14 crc kubenswrapper[4842]: I0202 08:23:14.869657 4842 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerStarted","Data":"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8"} Feb 02 08:23:14 crc kubenswrapper[4842]: I0202 08:23:14.889713 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92n7v" podStartSLOduration=3.421169337 podStartE2EDuration="4.889692867s" podCreationTimestamp="2026-02-02 08:23:10 +0000 UTC" firstStartedPulling="2026-02-02 08:23:12.847054747 +0000 UTC m=+5818.224322679" lastFinishedPulling="2026-02-02 08:23:14.315578297 +0000 UTC m=+5819.692846209" observedRunningTime="2026-02-02 08:23:14.886068757 +0000 UTC m=+5820.263336689" watchObservedRunningTime="2026-02-02 08:23:14.889692867 +0000 UTC m=+5820.266960779" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.136902 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.137296 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.194170 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.999043 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:22 crc kubenswrapper[4842]: I0202 08:23:22.070921 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:23 crc kubenswrapper[4842]: I0202 08:23:23.943301 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-92n7v" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" containerID="cri-o://236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" gracePeriod=2 Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.433390 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:23:24 crc kubenswrapper[4842]: E0202 08:23:24.434267 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.459029 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.612407 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.612771 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.613025 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.616064 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities" (OuterVolumeSpecName: "utilities") pod "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" (UID: "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.619388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz" (OuterVolumeSpecName: "kube-api-access-6x4sz") pod "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" (UID: "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6"). InnerVolumeSpecName "kube-api-access-6x4sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.633667 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" (UID: "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.714400 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.714433 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.714446 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953464 4842 generic.go:334] "Generic (PLEG): container finished" podID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" exitCode=0 Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953510 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8"} Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953575 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953621 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525"} Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953648 4842 scope.go:117] "RemoveContainer" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.977638 4842 scope.go:117] "RemoveContainer" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.002367 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.006667 4842 scope.go:117] "RemoveContainer" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.009977 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.051974 4842 scope.go:117] "RemoveContainer" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" Feb 02 08:23:25 crc kubenswrapper[4842]: E0202 08:23:25.052396 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8\": container with ID starting with 236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8 not found: ID does not exist" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052446 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8"} err="failed to get container status \"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8\": rpc error: code = NotFound desc = could not find container \"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8\": container with ID starting with 236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8 not found: ID does not exist" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052477 4842 scope.go:117] "RemoveContainer" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" Feb 02 08:23:25 crc kubenswrapper[4842]: E0202 08:23:25.052819 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69\": container with ID starting with 4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69 not found: ID does not exist" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052844 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69"} err="failed to get container status \"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69\": rpc error: code = NotFound desc = could not find container \"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69\": container with ID starting with 4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69 not found: ID does not exist" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052860 4842 scope.go:117] "RemoveContainer" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" Feb 02 08:23:25 crc kubenswrapper[4842]: E0202 08:23:25.053263 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4\": container with ID starting with 1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4 not found: ID does not exist" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.053289 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4"} err="failed to get container status \"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4\": rpc error: code = NotFound desc = could not find container \"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4\": container with ID starting with 1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4 not found: ID does not exist" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.443508 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" path="/var/lib/kubelet/pods/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6/volumes" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.205200 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"]
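
The three "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above (for 236c841c..., 4a17829a..., 1dcb8f41...) are the same benign pattern seen after every catalog pod teardown in this journal: the containers are already gone, so the follow-up status lookup gets a gRPC NotFound from CRI-O, which the deletor logs and then treats as already-deleted. The sketch below shows that tolerance pattern with a hypothetical remove helper; it is not the kubelet's pod_container_deletor code.

    package main

    import (
            "fmt"

            "google.golang.org/grpc/codes"
            "google.golang.org/grpc/status"
    )

    // removeTolerant treats NotFound as success: the container being
    // gone is exactly the state that removal was meant to reach.
    func removeTolerant(remove func(id string) error, id string) error {
            if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
                    return fmt.Errorf("remove %s: %w", id, err)
            }
            return nil
    }

    func main() {
            // Stub runtime answering the way CRI-O does above once a container is gone.
            gone := func(id string) error {
                    return status.Errorf(codes.NotFound, "could not find container %q: ID does not exist", id)
            }
            fmt.Println(removeTolerant(gone, "236c841c")) // <nil>: already removed is fine
    }
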
podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-content" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206407 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-content" Feb 02 08:23:28 crc kubenswrapper[4842]: E0202 08:23:28.206478 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-utilities" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206496 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-utilities" Feb 02 08:23:28 crc kubenswrapper[4842]: E0202 08:23:28.206518 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206536 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206875 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.209330 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.222786 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.373363 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.373786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.374005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.475497 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.475588 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod 
\"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.475653 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.476060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.476070 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.496207 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.549014 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.009467 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.449551 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.451091 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.452621 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qzj89"/"default-dockercfg-k7shm" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.453010 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qzj89"/"openshift-service-ca.crt" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.456678 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qzj89"/"kube-root-ca.crt" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.457646 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.598791 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.598904 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.699911 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.700110 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.700270 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.724270 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.765178 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.996686 4842 generic.go:334] "Generic (PLEG): container finished" podID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7" exitCode=0 Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.996738 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7"} Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.997066 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerStarted","Data":"cb6ecb9ed4cf1a283a792186876af6d935b160ffb9e9293bf5fd79f6d72b0634"} Feb 02 08:23:30 crc kubenswrapper[4842]: I0202 08:23:30.183982 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:23:30 crc kubenswrapper[4842]: W0202 08:23:30.194616 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d2d69ec_05f0_4d32_9003_71634c635ab6.slice/crio-0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf WatchSource:0}: Error finding container 0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf: Status 404 returned error can't find the container with id 0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf Feb 02 08:23:31 crc kubenswrapper[4842]: I0202 08:23:31.005689 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerStarted","Data":"0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf"} Feb 02 08:23:31 crc kubenswrapper[4842]: I0202 08:23:31.009572 4842 generic.go:334] "Generic (PLEG): container finished" podID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c" exitCode=0 Feb 02 08:23:31 crc kubenswrapper[4842]: I0202 08:23:31.009616 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c"} Feb 02 08:23:32 crc kubenswrapper[4842]: I0202 08:23:32.020924 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerStarted","Data":"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"} Feb 02 08:23:32 crc kubenswrapper[4842]: I0202 08:23:32.049856 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4s5zq" podStartSLOduration=2.604437819 podStartE2EDuration="4.049840378s" podCreationTimestamp="2026-02-02 08:23:28 +0000 UTC" firstStartedPulling="2026-02-02 08:23:29.99839159 +0000 UTC m=+5835.375659502" lastFinishedPulling="2026-02-02 08:23:31.443794109 +0000 UTC m=+5836.821062061" observedRunningTime="2026-02-02 08:23:32.046471184 +0000 UTC m=+5837.423739176" watchObservedRunningTime="2026-02-02 08:23:32.049840378 +0000 UTC m=+5837.427108290" Feb 02 
08:23:36 crc kubenswrapper[4842]: I0202 08:23:36.433596 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:23:36 crc kubenswrapper[4842]: E0202 08:23:36.434256 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:37 crc kubenswrapper[4842]: I0202 08:23:37.064491 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerStarted","Data":"b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d"} Feb 02 08:23:37 crc kubenswrapper[4842]: I0202 08:23:37.065248 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerStarted","Data":"e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12"} Feb 02 08:23:37 crc kubenswrapper[4842]: I0202 08:23:37.087360 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qzj89/must-gather-9skzq" podStartSLOduration=1.764619947 podStartE2EDuration="8.087335922s" podCreationTimestamp="2026-02-02 08:23:29 +0000 UTC" firstStartedPulling="2026-02-02 08:23:30.197144082 +0000 UTC m=+5835.574412034" lastFinishedPulling="2026-02-02 08:23:36.519860107 +0000 UTC m=+5841.897128009" observedRunningTime="2026-02-02 08:23:37.081351574 +0000 UTC m=+5842.458619516" watchObservedRunningTime="2026-02-02 08:23:37.087335922 +0000 UTC m=+5842.464603874" Feb 02 08:23:38 crc kubenswrapper[4842]: I0202 08:23:38.549465 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:38 crc kubenswrapper[4842]: I0202 08:23:38.549836 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:38 crc kubenswrapper[4842]: I0202 08:23:38.633124 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:39 crc kubenswrapper[4842]: I0202 08:23:39.142576 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:39 crc kubenswrapper[4842]: I0202 08:23:39.220032 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:41 crc kubenswrapper[4842]: I0202 08:23:41.090010 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4s5zq" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" containerID="cri-o://6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a" gracePeriod=2 Feb 02 08:23:41 crc kubenswrapper[4842]: I0202 08:23:41.998756 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097530 4842 generic.go:334] "Generic (PLEG): container finished" podID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a" exitCode=0 Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097576 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"} Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"cb6ecb9ed4cf1a283a792186876af6d935b160ffb9e9293bf5fd79f6d72b0634"} Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097625 4842 scope.go:117] "RemoveContainer" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097757 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.103731 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.103808 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.103905 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.105489 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities" (OuterVolumeSpecName: "utilities") pod "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" (UID: "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.112731 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8" (OuterVolumeSpecName: "kube-api-access-stnd8") pod "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" (UID: "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f"). InnerVolumeSpecName "kube-api-access-stnd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.113841 4842 scope.go:117] "RemoveContainer" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.143867 4842 scope.go:117] "RemoveContainer" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.155467 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" (UID: "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.173405 4842 scope.go:117] "RemoveContainer" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a" Feb 02 08:23:42 crc kubenswrapper[4842]: E0202 08:23:42.173895 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a\": container with ID starting with 6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a not found: ID does not exist" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.173937 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"} err="failed to get container status \"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a\": rpc error: code = NotFound desc = could not find container \"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a\": container with ID starting with 6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a not found: ID does not exist" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.173987 4842 scope.go:117] "RemoveContainer" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c" Feb 02 08:23:42 crc kubenswrapper[4842]: E0202 08:23:42.174583 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c\": container with ID starting with 45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c not found: ID does not exist" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.174607 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c"} err="failed to get container status \"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c\": rpc error: code = NotFound desc = could not find container \"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c\": container with ID starting with 45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c not found: ID does not exist" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.174623 4842 scope.go:117] "RemoveContainer" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7" Feb 02 08:23:42 crc kubenswrapper[4842]: 
E0202 08:23:42.174972 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7\": container with ID starting with 2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7 not found: ID does not exist" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.174992 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7"} err="failed to get container status \"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7\": rpc error: code = NotFound desc = could not find container \"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7\": container with ID starting with 2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7 not found: ID does not exist" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.205395 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.205427 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.205437 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.436860 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.444632 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:43 crc kubenswrapper[4842]: I0202 08:23:43.441481 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" path="/var/lib/kubelet/pods/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f/volumes" Feb 02 08:23:50 crc kubenswrapper[4842]: I0202 08:23:50.433572 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:23:50 crc kubenswrapper[4842]: E0202 08:23:50.434169 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:24:01 crc kubenswrapper[4842]: I0202 08:24:01.434108 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:24:01 crc kubenswrapper[4842]: E0202 08:24:01.435052 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:24:13 crc kubenswrapper[4842]: I0202 08:24:13.434084 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:24:14 crc kubenswrapper[4842]: I0202 08:24:14.362415 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f"} Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.663565 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/util/0.log" Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.801895 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/util/0.log" Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.825939 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/pull/0.log" Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.906158 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/pull/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.002985 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/pull/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.031722 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/util/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.032387 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/extract/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.241751 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-jknjh_79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.246024 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-stkw6_c679df42-e383-4a11-a50d-af9dbd4c4eb0/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.392731 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-4hrlz_bda41d33-cd37-4c4d-99d6-3808993000b4/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.458422 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-xq5nz_bd7497e1-afb6-44b5-8270-1021f837a65a/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.545499 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-96sfj_17af9a3f-7823-4340-bebc-e50e11807467/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.639718 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-skdgw_95850a5b-9e70-4f77-86ee-ff016eae6e7e/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.918369 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-b9qjw_a020d6c0-e749-4442-93e8-64a4c463e9d5/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.942143 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-jmvqq_0222c7fe-6311-4445-bf7f-e43fcb5ec5f9/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.091470 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-nzz4p_46313c01-1f03-4185-b7c4-2da5420bd703/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.109461 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-kz2zn_590654af-c639-4e9d-b821-c6caa1016695/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.277099 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-nsf9v_bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.349821 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-4zk9c_95d96e63-61f2-4d8d-be72-562384cb6f23/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.508713 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-wpm9z_60d10db6-9c42-471b-84fb-58e9c04c60fc/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.525676 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-c9lwb_b7d68fac-cffb-4dd6-8c1b-4537a3a36571/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.650248 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb_5e7a9701-ed45-4289-8272-f850efbf1e75/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.804680 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-757f46c65d-gfksg_3081c94c-e2f4-48b5-90b5-8bcc58234a9b/operator/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.032439 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5549s_e2e2a93a-9c50-4769-9983-e51f49c374d5/registry-server/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.183575 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-d8nns_255c38ec-b5b8-4017-94b8-93553884ed09/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.258998 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-qlxtv_58dd3197-be46-474d-84f5-c066a9483a52/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.432041 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zbqhn_1fffe017-3a94-4565-9778-ccea208aa8cc/operator/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.476002 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b6f655c79-bwmdm_6b1810ad-df0b-44b5-8ba8-953039b85411/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.713754 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-lbjfv_6344fbd8-d71a-4461-ad9a-ad71e339ba03/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.733269 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-q7vh6_7db6967e-a602-49a0-83f6-e1caff831173/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.950167 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-4q9m5_3fb9fda7-8167-4f3d-947b-3e002278ad99/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.960443 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-4ndxm_de128384-b923-4536-a485-33e65a1b7e04/manager/0.log" Feb 02 08:25:05 crc kubenswrapper[4842]: I0202 08:25:05.695999 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gnmkq_99922ba3-dd03-4c94-9663-9c530f7b3ad0/control-plane-machine-set-operator/0.log" Feb 02 08:25:05 crc kubenswrapper[4842]: I0202 08:25:05.800519 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdspj_45dcaecb-f74e-4eaf-886a-28b6632f8d44/kube-rbac-proxy/0.log" Feb 02 08:25:05 crc kubenswrapper[4842]: I0202 08:25:05.860818 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdspj_45dcaecb-f74e-4eaf-886a-28b6632f8d44/machine-api-operator/0.log" Feb 02 08:25:19 crc kubenswrapper[4842]: I0202 08:25:19.833209 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-446xj_ffbe6b41-d1da-4aec-bbfd-376c2f53a962/cert-manager-controller/0.log" Feb 02 08:25:20 crc kubenswrapper[4842]: I0202 08:25:20.003913 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-j6288_d7710841-a6c0-41ce-a408-f5940ab76922/cert-manager-cainjector/0.log" Feb 02 08:25:20 crc kubenswrapper[4842]: I0202 08:25:20.050663 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-hj9fx_466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9/cert-manager-webhook/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.087825 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-z2jg2_1875099f-a0f5-4ba0-b757-35755a6d0bcd/nmstate-console-plugin/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.193592 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hrqrp_558d578f-dad2-4317-8efd-628e30fe306e/nmstate-handler/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.257146 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-h4nv5_a4c06cff-e4b9-41be-a253-b1bf70dc1dc8/kube-rbac-proxy/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.300733 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-h4nv5_a4c06cff-e4b9-41be-a253-b1bf70dc1dc8/nmstate-metrics/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.435613 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-6qznw_3e9d6ba3-9c88-4425-87b9-8a5abd664ce7/nmstate-operator/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.463699 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-ctgl4_a9864264-6d23-4a03-8464-6b52a81c01d1/nmstate-webhook/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.431649 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7h9kp_890c2fc6-f70e-47e4-8578-908ec14d719f/kube-rbac-proxy/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.622732 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.737041 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7h9kp_890c2fc6-f70e-47e4-8578-908ec14d719f/controller/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.823322 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.841046 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.846986 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.929012 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.104641 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.120069 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.150814 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.211268 4842 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.338760 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.356813 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.370629 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.420964 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/controller/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.550712 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/frr-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.552463 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/kube-rbac-proxy/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.621667 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/kube-rbac-proxy-frr/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.814956 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.818302 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-ksx75_412f3125-792a-4cb4-858e-e0376903066a/frr-k8s-webhook-server/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.008607 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74749cc964-2p2rc_b3b00acd-6687-457f-8744-7057f840e5bd/manager/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.216200 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f569b8d8f-wvbf9_793714c2-9e47-4e82-a201-e2e8ac9d7bff/webhook-server/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.242175 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-74hmd_3016a0a1-abd6-486a-af0b-cf4c7b8db672/kube-rbac-proxy/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.799921 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-74hmd_3016a0a1-abd6-486a-af0b-cf4c7b8db672/speaker/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.984411 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/frr/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.232670 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.394266 4842 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.427996 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.456621 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.638295 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.638334 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/extract/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.659715 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.802404 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.983619 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.994827 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.006586 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.205758 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.207058 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/extract/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.225146 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.403755 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.650690 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.663601 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.701122 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.903369 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.910932 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.969438 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/extract/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.106065 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-utilities/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.292290 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-content/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.303853 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-content/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.480423 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-utilities/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.627667 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-utilities/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.645764 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-content/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.842436 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.021681 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.044091 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/registry-server/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.049851 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.077627 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.215926 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.254006 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.476939 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vbb7f_57f599bc-2735-4763-8510-fe623d36bd10/marketplace-operator/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.610372 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/registry-server/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.621586 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.704998 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.709155 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.803762 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.966827 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.028234 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.059283 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-content/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.120909 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/registry-server/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.280661 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.293013 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-content/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.321732 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-content/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.448695 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.467191 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-content/0.log" Feb 02 08:26:26 crc kubenswrapper[4842]: I0202 08:26:26.139958 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/registry-server/0.log" Feb 02 08:26:42 crc kubenswrapper[4842]: I0202 08:26:42.146096 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:26:42 crc kubenswrapper[4842]: I0202 08:26:42.146608 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:27:12 crc kubenswrapper[4842]: I0202 08:27:12.146789 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:27:12 crc kubenswrapper[4842]: I0202 08:27:12.147479 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:27:36 crc kubenswrapper[4842]: I0202 08:27:36.970356 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerID="e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12" exitCode=0 Feb 02 08:27:36 crc kubenswrapper[4842]: I0202 08:27:36.970517 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" 
event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerDied","Data":"e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12"} Feb 02 08:27:36 crc kubenswrapper[4842]: I0202 08:27:36.974640 4842 scope.go:117] "RemoveContainer" containerID="e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12" Feb 02 08:27:37 crc kubenswrapper[4842]: I0202 08:27:37.565637 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/gather/0.log" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.146760 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.147619 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.147703 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.148932 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.149052 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f" gracePeriod=600 Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.021723 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f" exitCode=0 Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.021817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f"} Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.023037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5"} Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.023088 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:27:44 crc kubenswrapper[4842]: I0202 08:27:44.886404 4842 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:27:44 crc kubenswrapper[4842]: I0202 08:27:44.886980 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qzj89/must-gather-9skzq" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" containerID="cri-o://b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d" gracePeriod=2 Feb 02 08:27:44 crc kubenswrapper[4842]: I0202 08:27:44.892384 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.041457 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/copy/0.log" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.042022 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerID="b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d" exitCode=143 Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.356066 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/copy/0.log" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.356970 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.492155 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"0d2d69ec-05f0-4d32-9003-71634c635ab6\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.492294 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"0d2d69ec-05f0-4d32-9003-71634c635ab6\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.497925 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8" (OuterVolumeSpecName: "kube-api-access-kcjz8") pod "0d2d69ec-05f0-4d32-9003-71634c635ab6" (UID: "0d2d69ec-05f0-4d32-9003-71634c635ab6"). InnerVolumeSpecName "kube-api-access-kcjz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.588154 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0d2d69ec-05f0-4d32-9003-71634c635ab6" (UID: "0d2d69ec-05f0-4d32-9003-71634c635ab6"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.593984 4842 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.594129 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") on node \"crc\" DevicePath \"\"" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.049640 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/copy/0.log" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.051128 4842 scope.go:117] "RemoveContainer" containerID="b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.051211 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.068493 4842 scope.go:117] "RemoveContainer" containerID="e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12" Feb 02 08:27:47 crc kubenswrapper[4842]: I0202 08:27:47.466734 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" path="/var/lib/kubelet/pods/0d2d69ec-05f0-4d32-9003-71634c635ab6/volumes" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.005542 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007043 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-utilities" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007077 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-utilities" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007113 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="gather" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007131 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="gather" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007156 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007175 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007250 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-content" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007271 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-content" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007298 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007316 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007704 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007746 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007808 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="gather" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.010091 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.022817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.201151 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.201321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.201436 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.302748 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.302888 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.302991 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"community-operators-t4q45\" (UID: 
\"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.303463 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.303548 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.329924 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.336930 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.820115 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.320083 4842 generic.go:334] "Generic (PLEG): container finished" podID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" exitCode=0 Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.320126 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8"} Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.320157 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerStarted","Data":"e42607ea0f6d3823bc1171920d5109e1351ec0075eb8ab4a58b83dc6b1509c46"} Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.322254 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:28:18 crc kubenswrapper[4842]: I0202 08:28:18.327890 4842 generic.go:334] "Generic (PLEG): container finished" podID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" exitCode=0 Feb 02 08:28:18 crc kubenswrapper[4842]: I0202 08:28:18.327966 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0"} Feb 02 08:28:19 crc kubenswrapper[4842]: I0202 08:28:19.340649 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" 
event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerStarted","Data":"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4"} Feb 02 08:28:19 crc kubenswrapper[4842]: I0202 08:28:19.379206 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t4q45" podStartSLOduration=2.931236997 podStartE2EDuration="4.379187848s" podCreationTimestamp="2026-02-02 08:28:15 +0000 UTC" firstStartedPulling="2026-02-02 08:28:17.321970688 +0000 UTC m=+6122.699238610" lastFinishedPulling="2026-02-02 08:28:18.769921539 +0000 UTC m=+6124.147189461" observedRunningTime="2026-02-02 08:28:19.376671546 +0000 UTC m=+6124.753939458" watchObservedRunningTime="2026-02-02 08:28:19.379187848 +0000 UTC m=+6124.756455760" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.338005 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.338589 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.398284 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.464774 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.658458 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:28 crc kubenswrapper[4842]: I0202 08:28:28.413570 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t4q45" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" containerID="cri-o://36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" gracePeriod=2 Feb 02 08:28:28 crc kubenswrapper[4842]: I0202 08:28:28.895259 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.010415 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"5ca6a629-8605-4947-ab91-0a91b960ae4d\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.010522 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"5ca6a629-8605-4947-ab91-0a91b960ae4d\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.010581 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"5ca6a629-8605-4947-ab91-0a91b960ae4d\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.011922 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities" (OuterVolumeSpecName: "utilities") pod "5ca6a629-8605-4947-ab91-0a91b960ae4d" (UID: "5ca6a629-8605-4947-ab91-0a91b960ae4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.023358 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj" (OuterVolumeSpecName: "kube-api-access-tjkzj") pod "5ca6a629-8605-4947-ab91-0a91b960ae4d" (UID: "5ca6a629-8605-4947-ab91-0a91b960ae4d"). InnerVolumeSpecName "kube-api-access-tjkzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.089783 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ca6a629-8605-4947-ab91-0a91b960ae4d" (UID: "5ca6a629-8605-4947-ab91-0a91b960ae4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.112371 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") on node \"crc\" DevicePath \"\"" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.112419 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.112430 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.426425 4842 generic.go:334] "Generic (PLEG): container finished" podID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" exitCode=0 Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.426506 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4"} Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.426660 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.427762 4842 scope.go:117] "RemoveContainer" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.427735 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"e42607ea0f6d3823bc1171920d5109e1351ec0075eb8ab4a58b83dc6b1509c46"} Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.451882 4842 scope.go:117] "RemoveContainer" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.502279 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.510710 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.528208 4842 scope.go:117] "RemoveContainer" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.548234 4842 scope.go:117] "RemoveContainer" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" Feb 02 08:28:29 crc kubenswrapper[4842]: E0202 08:28:29.548732 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4\": container with ID starting with 36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4 not found: ID does not exist" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.548773 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4"} err="failed to get container status \"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4\": rpc error: code = NotFound desc = could not find container \"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4\": container with ID starting with 36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4 not found: ID does not exist" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.548794 4842 scope.go:117] "RemoveContainer" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" Feb 02 08:28:29 crc kubenswrapper[4842]: E0202 08:28:29.549254 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0\": container with ID starting with 0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0 not found: ID does not exist" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.549366 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0"} err="failed to get container status \"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0\": rpc error: code = NotFound desc = could not find container \"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0\": container with ID starting with 0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0 not found: ID does not exist" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.549483 4842 scope.go:117] "RemoveContainer" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" Feb 02 08:28:29 crc kubenswrapper[4842]: E0202 08:28:29.549890 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8\": container with ID starting with 87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8 not found: ID does not exist" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.549910 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8"} err="failed to get container status \"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8\": rpc error: code = NotFound desc = could not find container \"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8\": container with ID starting with 87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8 not found: ID does not exist" Feb 02 08:28:31 crc kubenswrapper[4842]: I0202 08:28:31.445279 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" path="/var/lib/kubelet/pods/5ca6a629-8605-4947-ab91-0a91b960ae4d/volumes" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.885896 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:29:37 crc kubenswrapper[4842]: E0202 08:29:37.887029 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-content" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887051 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-content" Feb 02 08:29:37 crc kubenswrapper[4842]: E0202 08:29:37.887076 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-utilities" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887086 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-utilities" Feb 02 08:29:37 crc kubenswrapper[4842]: E0202 08:29:37.887107 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887120 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887368 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.888888 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.905825 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.019863 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.019926 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.020000 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.121324 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.121405 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") pod \"redhat-operators-bjrt2\" (UID: 
\"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.121481 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.122061 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.122107 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.141931 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.218141 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.681696 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:29:39 crc kubenswrapper[4842]: I0202 08:29:39.049431 4842 generic.go:334] "Generic (PLEG): container finished" podID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerID="a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61" exitCode=0 Feb 02 08:29:39 crc kubenswrapper[4842]: I0202 08:29:39.049534 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerDied","Data":"a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61"} Feb 02 08:29:39 crc kubenswrapper[4842]: I0202 08:29:39.051884 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerStarted","Data":"7e8d42d5b553d6226fc54f8d53a801fa505d6e4a86c12d3ef9b9e632e565ecb7"} Feb 02 08:29:41 crc kubenswrapper[4842]: I0202 08:29:41.071747 4842 generic.go:334] "Generic (PLEG): container finished" podID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerID="772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe" exitCode=0 Feb 02 08:29:41 crc kubenswrapper[4842]: I0202 08:29:41.072074 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerDied","Data":"772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe"} Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.084950 4842 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerStarted","Data":"2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe"} Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.111898 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bjrt2" podStartSLOduration=2.641197586 podStartE2EDuration="5.111881457s" podCreationTimestamp="2026-02-02 08:29:37 +0000 UTC" firstStartedPulling="2026-02-02 08:29:39.050939427 +0000 UTC m=+6204.428207339" lastFinishedPulling="2026-02-02 08:29:41.521623298 +0000 UTC m=+6206.898891210" observedRunningTime="2026-02-02 08:29:42.109370545 +0000 UTC m=+6207.486638527" watchObservedRunningTime="2026-02-02 08:29:42.111881457 +0000 UTC m=+6207.489149379" Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.146521 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.146601 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:29:48 crc kubenswrapper[4842]: I0202 08:29:48.218550 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:48 crc kubenswrapper[4842]: I0202 08:29:48.219451 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:49 crc kubenswrapper[4842]: I0202 08:29:49.295346 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bjrt2" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="registry-server" probeResult="failure" output=< Feb 02 08:29:49 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 08:29:49 crc kubenswrapper[4842]: >